comp.lang.ada
* Matrix Multiplication
@ 1999-12-14  0:00 William Dale
  1999-12-14  0:00 ` Gautier
                   ` (5 more replies)
  0 siblings, 6 replies; 81+ messages in thread
From: William Dale @ 1999-12-14  0:00 UTC (permalink / raw)


OK, 
I was challenged by one of my co-workers - a control and guidance type -
not a real software engineer. 

His claim  : "Fortran is faster doing floating point multiplication than
Ada" 

I could not get any other specifications such as hardware, particular
compiler, version, vendor, special math libraries or any other
equivocations.  He just claims the above in all cases.

So could I get some help getting times for a matrix inversion on a 50x50
floating point matrix in both languages?  Anyone already done these
types of tests? Any suggestions on special libraries that he may not know
about to do such things in Ada? Obviously he has the Fortran side nailed.

I know it is silly but this is the kind of FUD that gets thrown around
all the time.  Either language could win - it depends on many of the
above issues.
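
For concreteness, the kind of minimal harness I have in mind for the Ada
side (just a sketch - the naive triple loop and the Ada.Calendar timing
are assumptions on my part, nothing he specified):

   with Ada.Calendar; use Ada.Calendar;
   with Ada.Text_IO;  use Ada.Text_IO;

   procedure Time_Mult is
      N : constant := 50;
      type Matrix is array (1 .. N, 1 .. N) of Long_Float;
      A, B, C : Matrix := (others => (others => 1.0));
      Start, Stop : Time;
      Sum : Long_Float;
   begin
      Start := Clock;
      for I in 1 .. N loop
         for J in 1 .. N loop
            Sum := 0.0;
            for K in 1 .. N loop
               Sum := Sum + A (I, K) * B (K, J);
            end loop;
            C (I, J) := Sum;
         end loop;
      end loop;
      Stop := Clock;
      --  Print one element so the computation cannot be optimized away.
      Put_Line ("C(1,1) =" & Long_Float'Image (C (1, 1)));
      Put_Line (Duration'Image (Stop - Start) & " seconds");
   end Time_Mult;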

Thanks!
Bill Dale
mailto:william.dale.jr@lmco.com





* Re: Matrix Multiplication
  1999-12-14  0:00 William Dale
@ 1999-12-14  0:00 ` Gautier
       [not found]   ` <5l1f5s4kck891a2s6o8bhvkirm4q79hm6c@4ax.com>
  1999-12-15  0:00   ` Gautier
  1999-12-14  0:00 ` David C. Hoos, Sr.
                   ` (4 subsequent siblings)
  5 siblings, 2 replies; 81+ messages in thread
From: Gautier @ 1999-12-14  0:00 UTC (permalink / raw)


> OK,
> I was challenged by one of my co-workers - a control and guidance type -
> not a real software engineer.
> 
> His claim  : "Fortran is faster doing floating point multiplication than
> Ada"
> 
> I could not get any other specifications such as hardware, particular
> compiler, version, vendor, special math libraries or any other
> equivocations.  He just claims the above in all cases.
>
> So could I get some help getting times for a matrix inversion on a 50x50
> floating point matrix in both languages?  Anyone already done these
> types of tests? Any suggestions on special libraries that he may not know
> about to do such things in Ada? Obviously he has the Fortran side nailed.

I have done a comparison, for sparse matrix computations, where
matrix multiplications do not depend on library code, so you really
compare Ada and Fortran. Compiled with DEC Ada & DEC Fortran, both
give almost the same timing (within a few percent) - even though generic
code is used in Ada to match single and double precision, and
the Ada code is not optimal.
This is an equivalent-code comparison.
For a large-scale numerics project, procedure encapsulation,
subtyping, and exceptions in Ada save a lot of parameter passing,
error codes, and redimensioning everything everywhere (with errors...)
compared to Fortran - and this is beside the floating-point part
strictly speaking. It remains to be proven, but my conjecture
is that one can produce Ada code much friendlier to the processor cache,
hence faster than Fortran. Well-placed "renames" or "inline" pragmas
produce astonishing effects, too.

> I know it is silly but this is the kind of FUD that gets thrown around
> all the time.  Either language could win - it depends on many of the
> above issues.

Claims of this sort are never verified, and the rule is to compare apples
with potatoes... That is the problem with computing (and humans...). Maybe
things were more `visual' in the steam / electricity debates - although...

NB: for a fair comparison don't forget to suppress all the standard Ada
checks at compile-time...
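
(Concretely, a minimal sketch - pragma Suppress is standard Ada, while
the -gnatp switch is GNAT-specific:)

   --  In the source, once per unit:
   pragma Suppress (All_Checks);

   --  or on the GNAT command line, together with optimization:
   --    gcc -c -O2 -gnatp matmul_test.adb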

-- 
Gautier

_____\\________________\_______\
http://members.xoom.com/gdemont/





* Re: Matrix Multiplication
  1999-12-15  0:00     ` E. Robert Tisdale
@ 1999-12-14  0:00       ` Richard Maine
  0 siblings, 0 replies; 81+ messages in thread
From: Richard Maine @ 1999-12-14  0:00 UTC (permalink / raw)


"E. Robert Tisdale" <edwin@netwood.net> writes:

> That's even easier.  The Ada version could call the f90 intrinsic too.
> All you would need to do is link the f90 library which contains matmul.

I will not get into the language comparison or benchmarking aspects of
this thread.

I just note that "linking the f90 library that contains matmul" isn't
necessarily that straightforward.  Matmul is an intrinsic.  It is not
at all given that there even *IS* a library that contains it; a compiler
is quite free to always do it inline.  And even if much of the work is
in a library routine, the interface to it isn't necessarily known
outside of the compiler.

Intrinsics are basically part of the compiler internals.  They *MAY* be
implemented with callable library routines, but there is no guarantee of that.
And it certainly isn't a portable way to call them.

-- 
Richard Maine
maine@qnet.com





* Re: Matrix Multiplication
  1999-12-14  0:00 William Dale
  1999-12-14  0:00 ` Gautier
@ 1999-12-14  0:00 ` David C. Hoos, Sr.
  1999-12-15  0:00 ` Greg Lindahl
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 81+ messages in thread
From: David C. Hoos, Sr. @ 1999-12-14  0:00 UTC (permalink / raw)



William Dale <william.dale.jr@lmco.com> wrote in message
news:385699B5.59C14D03@lmco.com...
> OK,
> I was challenged by one of my co-workers - a control and guidance type -
> not a real software engineer.
>
> His claim  : "Fortran is faster doing floating point multiplication than
> Ada"
>
> I could not get any other specifications such as hardware, particular
> compiler, version, vendor, special math libraries or any other
> equivocations.  He just claims the above in all cases.
>
> So could I get some help getting times for a matrix inversion on a 50x50
> floating point matrix in both languages?  Anyone already done these
> types of tests? Any suggestions on special libraries that he may not know
> about to do such things in Ada? Obviously he has the Fortran side nailed.
>
> I know it is silly but this is the kind of FUD that gets thrown around
> all the time.  Either language could win - it depends on many of the
> above issues.

Well... any meaningful comparison could only be drawn if both subprograms
were implementing the same algorithm.







* Re: Matrix Multiplication
       [not found] ` <01bf4708$99ef98f0$022a6282@dieppe>
@ 1999-12-15  0:00   ` Gautier
  1999-12-15  0:00   ` Robert A Duff
  1 sibling, 0 replies; 81+ messages in thread
From: Gautier @ 1999-12-15  0:00 UTC (permalink / raw)


> Well I don't know what is a fast or slow language !!

An example: Swiss French is slower than Parisian French.
Ah, no, sorry - it's a difference in compilers...

G.





* Re: Matrix Multiplication
       [not found] ` <01bf4708$99ef98f0$022a6282@dieppe>
  1999-12-15  0:00   ` Gautier
@ 1999-12-15  0:00   ` Robert A Duff
  1999-12-15  0:00     ` Marin D. Condic
                       ` (2 more replies)
  1 sibling, 3 replies; 81+ messages in thread
From: Robert A Duff @ 1999-12-15  0:00 UTC (permalink / raw)


"Pascal Obry" <pascal_obry@csi.com> writes:

> Well I don't know what is a fast or slow language !!

I do.  A fast language is one for which it is feasible to build
compilers that generate fast code.  A slow language is one for which
that is not feasible.

Also I prefer to put the burden of proof on the language advocates --
that is, a language should be considered "slow" until proven "fast" by
the existence of at least one good production-quality compiler.

By this definition, Smalltalk, for example, is slow -- I've never seen a
Smalltalk compiler that can generate fast code.  Furthermore, it seems
impossible, without doing all code generation at link time, which I
claim is not feasible in many cases.

I don't know whether Fortran is faster than Ada at matrix multiplies,
but it does seem like a meaningful question to ask.  If you measured
lots of compilers, you could learn something useful.

- Bob





* Re: Matrix Multiplication
  1999-12-14  0:00 William Dale
                   ` (2 preceding siblings ...)
  1999-12-15  0:00 ` Greg Lindahl
@ 1999-12-15  0:00 ` Ted Dennison
  1999-12-15  0:00   ` William B. Clodius
  1999-12-15  0:00 ` E. Robert Tisdale
       [not found] ` <01bf4708$99ef98f0$022a6282@dieppe>
  5 siblings, 1 reply; 81+ messages in thread
From: Ted Dennison @ 1999-12-15  0:00 UTC (permalink / raw)


In article <385699B5.59C14D03@lmco.com>,
  fud@fud.com wrote:
> OK,
> I was challenged by one of my co-workers - a control and guidance type
> - not a real software engineer.
>
> His claim  : "Fortran is faster doing floating point multiplication
> than Ada"

I hope you meant "matrix multiplication", as in the subject. This
statement makes absolutely no sense.

> I could not get any other specifications such as hardware, particular
> compiler, version, vendor, special math libraries or any other
> equivocations.  He just claims the above in all cases.

So he admits that he basically doesn't know what he's talking about. Nice.

A couple of years ago I took a graduate course in compiler optimizations
for parallel architectures. As you can imagine, the number crunchers
that use high-end parallel machines want the ability to use their
parallel capabilities to speed up their code, without having to rewrite it
all in another language. The language in question is almost always
Fortran (from what I can tell, for historical reasons).

Anyway, this need has driven a lot of research into optimization of
Fortran code on parallel architectures. It has also created a new
variant of the language : HPF (High-Performance Fortran). HPF has 3 new
looping constructs that correspond to parallel operations. The big issue
for optimizing these kinds of things is figuring out the lifetimes of
objects and values. The teacher, of course, was a big Fortran fan, so I
didn't mention Ada (I wanted a good grade. I'm not that stupid). But all
during the class, I'd keep raising my hand and saying "wouldn't it be
easier to perform this optimization if you had {insert Ada rule here}".
Generally, she'd agree.

In fact, I think it would be quite possible to create an Ada compiler
that could optimize matrix operations *better* than the best Fortran
compiler, assuming you also devise loop pragmas to match the HPF
parallel looping constructs. You certainly could do it better with the
same amount of effort.

But "effort" is the key. How much effort a compiler vendor puts into
optimization doesn't directly have anything to do with the language the
compiler recognizes. It has a lot to do with the compiler's maturity and
its target market. However, I wouldn't be shocked to discover that your
average Fortran developer cares immensely more about the efficiency of
looping matrix operations than your average Ada developer.

In any case, claiming "compiled language X is faster than compiled
language Y" is just flat-out wrong.

--
T.E.D.


Sent via Deja.com http://www.deja.com/
Before you buy.





* Re: Matrix Multiplication
  1999-12-15  0:00       ` Marin D. Condic
  1999-12-15  0:00         ` Gautier
@ 1999-12-15  0:00         ` Ted Dennison
  1999-12-15  0:00           ` Gautier
  1 sibling, 1 reply; 81+ messages in thread
From: Ted Dennison @ 1999-12-15  0:00 UTC (permalink / raw)


In article <3857D640.C1991F0C@quadruscorp.com>,
  "Marin D. Condic" <mcondic-nospam@quadruscorp.com> wrote:

> language and the quality of a specific compiler. So the compiler
> writers have a special responsibility to put out good quality products
> lest they besmirch the language in the minds of many! :-)

Hmmm. I think that's my cue to point out that ObjectAda has no loop
optimization options whatsoever. :-)

(nudge, nudge)

--
T.E.D.


Sent via Deja.com http://www.deja.com/
Before you buy.





* Re: Matrix Multiplication
  1999-12-15  0:00   ` William B. Clodius
@ 1999-12-15  0:00     ` Ted Dennison
  1999-12-15  0:00       ` William B. Clodius
  0 siblings, 1 reply; 81+ messages in thread
From: Ted Dennison @ 1999-12-15  0:00 UTC (permalink / raw)


In article <3857E220.26AE90BB@lanl.gov>,
  "William B. Clodius" <wclodius@lanl.gov> wrote:
> numerics question where the types are determined statically and
> generally map to the equivalent hardware operations. In this case a
> common performance difference between languages is the extent to which
> the language can rely on local analyses for its optimizations, which
> in turn mostly depends on the aliasing properties of entities in the
> language, which in turn is most strongly influenced by the properties
> of the arguments to procedures. In this case Ada has relatively loose
> rules compared to some other languages, e.g., C/C++, but stronger
> rules than Fortran, so that Ada is in principle harder to optimize
> than Fortran. However, the difference between Fortran and Ada in this

That is a fairly accurate principle. "So why did I end up finding so many
'Ada could do better' situations?" you may ask. Well, there are two issues
you missed in the above:
  1)  In general, the more information a compiler has about the possible
values of objects, the better it can optimize. That gives strongly typed
languages an advantage. You may turn off the runtime checking, but the
compiler can still assume the values will never exceed that range for
code generation purposes.

For instance, a big issue is how much information about the possible
values of the iteration variables the compiler has access to. The
ability of an Ada developer to constrain his index types using subtypes
potentially gives the compiler a tremendous boost here.
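
For instance (a minimal sketch of the point, with invented names):

   procedure Scale is
      subtype Index is Integer range 1 .. 50;
      type Vec is array (Index) of Float;
      V : Vec := (others => 1.0);
   begin
      for I in Index loop
         --  I is known to lie in 1 .. 50 by construction, so V (I)
         --  needs no index check, and the optimizer may exploit the
         --  bounds even with runtime checking turned off.
         V (I) := V (I) * 2.0;
      end loop;
   end Scale;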

  2)  You ignore non-local optimization issues. That's fair enough. But
in their quest for more speed, researchers are delving into non-local
optimizations. Ada's strong typing drastically reduces the possible access
paths the optimizer has to worry about for a location or value. Also, a
lot of the language was written to allow compilers leeway in optimizing.
For example, users aren't allowed to make assumptions about the
parameter passing method used for subroutines except in certain
situations. Generally the more information and freedom a compiler has at
its disposal, the easier it will be for it to optimize.

> regard is small enough that it might be washed out in the noise of
> variations in efficiency of compiler implementations.

That's the big point, and we are in complete agreement here.

--
T.E.D.


Sent via Deja.com http://www.deja.com/
Before you buy.





* Re: Matrix Multiplication
  1999-12-15  0:00       ` Marin D. Condic
@ 1999-12-15  0:00         ` Gautier
  1999-12-16  0:00           ` Marin D. Condic
  1999-12-15  0:00         ` Ted Dennison
  1 sibling, 1 reply; 81+ messages in thread
From: Gautier @ 1999-12-15  0:00 UTC (permalink / raw)


> It would seem that if the compiler were smart, it would compute I and J
> once and find the position once, then reuse it throughout the loop.

GNAT does reuse addresses. A nice proof in c.l.a (by Samuel Tardieu):

<< [a(b(c,d+e(f,g)).h(i,j)) :=  a(b(c,d+e(f,g)).h(i,j)) + 1;]

Where did you get the impression that the optimizer would miss this? 
 For example, using GNAT, the following expression generates: (ix86 code) 
                           
 t__pXb: movl t__jXb,%edx                             |
         decl %edx                                    |
         movl t__iXb,%eax                             |
         decl %eax                                    |
         leal (%eax,%eax,4),%eax                      |
         sall $3,%eax                                 |
         leal (%eax,%edx,4),%edx                      |
         movl t__gXb,%ecx                             |
         decl %ecx                                    | Address computation
         movl t__fXb,%eax                             |
         decl %eax                                    |
         leal (%eax,%eax,4),%eax                      |
         sall $3,%eax                                 |
         movl t__eXb(%eax,%ecx,4),%eax                |
         addl t__dXb,%eax                             |
         imull $400,%eax,%eax                         |
         leal -400(%edx,%eax),%eax                    |
         imull $4000,t__cXb,%edx                      |
         movl t__bXb-4000(%eax,%edx),%eax             |
         decl %eax                                    |
         incl t__aXb(,%eax,4)            <--- Increment done here!
         ret
                           
 The code used to generate this was: (-O3 -fomit-frame-pointer -gnatp) 
                           
 package T is
   pragma Elaborate_Body;
 end T;
                           
 package body T is
   pragma Warnings (Off); -- Uninitialized variables
                           
   type Two_Ints is array (Integer range <>, Integer range <>) of Integer; 
                           
   type Rec is record
     H : Two_Ints (1 .. 10, 1 .. 10);
   end record;
                           
   type Two_Recs is array (Integer range <>, Integer range <>) of Rec; 
                           
   A : array (1 .. 10) of Integer;
   B : Two_Recs (1 .. 10, 1 .. 10);
   C : Integer;
   D : Integer;
   E : Two_Ints (1 .. 10, 1 .. 10);
   F : Integer;
   G : Integer;
   I : Integer;
   J : Integer;
                           
   procedure P is
     begin
       a(b(c,d+e(f,g)).h(i,j)) :=  a(b(c,d+e(f,g)).h(i,j)) + 1;
     end P;
                           
 end T;

>>

But there are *rare* cases where such "invariants" are not provable as such,
and a renames helps, even with GNAT. On the other hand, some Ada 83 compilers
do not seem to optimize these things (Compaq Ada, Alsys).

-- 
Gautier

_____\\________________\_______\_________
http://members.xoom.com/gdemont/gsoft.htm





* Re: Matrix Multiplication
  1999-12-15  0:00         ` Ted Dennison
@ 1999-12-15  0:00           ` Gautier
  1999-12-15  0:00             ` Tucker Taft
                               ` (2 more replies)
  0 siblings, 3 replies; 81+ messages in thread
From: Gautier @ 1999-12-15  0:00 UTC (permalink / raw)


> Hmmm. I think that's my cue to point out that ObjectAda has no loop
> optimization options whatsoever. :-)

Is there optimization at all in OA ?

I didn't find the switches for that...
Maybe a hidden panel in the interface - a bit like...
  http://lsewww.epfl.ch/wiesmann/jokes/errors/word.gif
(just a bit)

-- 
Gautier

_____\\________________\_______\
http://members.xoom.com/gdemont/





* Re: Matrix Multiplication
  1999-12-15  0:00           ` Gautier
@ 1999-12-15  0:00             ` Tucker Taft
  1999-12-16  0:00             ` Ted Dennison
  1999-12-16  0:00             ` Ted Dennison
  2 siblings, 0 replies; 81+ messages in thread
From: Tucker Taft @ 1999-12-15  0:00 UTC (permalink / raw)


Gautier wrote:
> 
> > Hmmm. I think that's my cue to point out that ObjectAda has no loop
> > optimization options whatsoever. :-)
> 
> Is there optimization at all in OA ?

I certainly presume so, since our front end generates
information specifically for the ObjectAda optimizer.
Try something like "-O3" or "-opt".  Usually if you
run the compiler with no arguments it will dump out
a list of available switches.

One possibility is that the optimizer is only in some of their
targets.  Historically, the x86 back end has been pretty different
from the other "risc-ish" backends.  I have some memory that their
x86 backend did not do as many register optimizations as the
risc-ish backends (which is not terribly surprising given the
rather un-general register set on the x86 -- note: not trying to
start a CISC vs. RISC war here ;-).

> I didn't find the switches for that...
> Maybe a hidden panel in the interface - a bit like...
>   http://lsewww.epfl.ch/wiesmann/jokes/errors/word.gif
> (just a bit)
> 
> --
> Gautier

-- 
-Tucker Taft   stt@averstar.com   http://www.averstar.com/~stt/
Technical Director, Distributed IT Solutions  (www.averstar.com/tools)
AverStar (formerly Intermetrics, Inc.)   Burlington, MA  USA





* Re: Matrix Multiplication
  1999-12-15  0:00 Carlisle, Martin
@ 1999-12-15  0:00 ` Mario Klebsch
  1999-12-19  0:00   ` Robert Dewar
  1999-12-19  0:00 ` Robert Dewar
  1999-12-19  0:00 ` Robert Dewar
  2 siblings, 1 reply; 81+ messages in thread
From: Mario Klebsch @ 1999-12-15  0:00 UTC (permalink / raw)


"Carlisle, Martin" <Martin.Carlisle@usafa.af.mil> writes:

>The idea that matrix multiplication would always be inlined seems absurd to
>me.  The simple implementation has O(n^3) running time.

What about compiling for a CPU that does have an instruction for
matrix multiplication? It seems absurd to me not to use that
instruction if it is available.

73, Mario
-- 
Mario Klebsch						mario@klebsch.de





* Re: Matrix Multiplication
  1999-12-14  0:00 ` Gautier
       [not found]   ` <5l1f5s4kck891a2s6o8bhvkirm4q79hm6c@4ax.com>
@ 1999-12-15  0:00   ` Gautier
  1 sibling, 0 replies; 81+ messages in thread
From: Gautier @ 1999-12-15  0:00 UTC (permalink / raw)


> NB: for a fair comparison don't forget to suppress all the standard Ada
> checks at compile-time...

To avoid misunderstanding, I mean by that: suppress, when compiling,
all the standard Ada run-time checks. ;-) G.





* Re: Matrix Multiplication
  1999-12-14  0:00 William Dale
                   ` (3 preceding siblings ...)
  1999-12-15  0:00 ` Ted Dennison
@ 1999-12-15  0:00 ` E. Robert Tisdale
       [not found]   ` <3856FD3F.8291A71C@ucar.edu>
       [not found] ` <01bf4708$99ef98f0$022a6282@dieppe>
  5 siblings, 1 reply; 81+ messages in thread
From: E. Robert Tisdale @ 1999-12-15  0:00 UTC (permalink / raw)


William Dale wrote:

> I was challenged by one of my co-workers -
> a control and guidance type - not a real software engineer.
>
> His claim:
> "Fortran is faster doing floating point multiplication than Ada"
>
> I could not get any other specifications such as hardware,
> particular compiler, version, vendor, special math libraries
> or any other equivocations.  He just claims the above in all cases.
>
> So could I get some help getting times
> for a matrix inversion on a 50x50 floating point matrix in both languages.
> Anyone already done these types of tests?
> Any suggestions on special libraries that he may not know about
> to do such things in Ada? Obviously he has the Fortran side nailed.
>
> I know it is silly but this is the kind of FUD
> that gets thrown around all the time.
> Either language could win - it depends on many of the above issues.

Performance, in general, has nothing to do with the programming language.
There may be considerable differences between optimizing compilers.
Fortran compilers are typically better at optimizing numerical codes
than other compilers because that is what Fortran programmers do.
Just get a copy of your co-worker's Fortran source code
and convert it into an Ada program,
then compile his code with the GNU Fortran compiler g77
and compile your code with the GNU Ada compiler gnat.
Both compilers should emit exactly the same code
unless you do something wrong.

Hope this helps,
E. Robert Tisdale <edwin@netwood.net>






* Re: Matrix Multiplication
       [not found]   ` <3856FD3F.8291A71C@ucar.edu>
@ 1999-12-15  0:00     ` E. Robert Tisdale
  1999-12-14  0:00       ` Richard Maine
  0 siblings, 1 reply; 81+ messages in thread
From: E. Robert Tisdale @ 1999-12-15  0:00 UTC (permalink / raw)


Dennis Shea wrote:

> A few minor comments: the fortran standard is f90 [f95].
> f90 has numerous array processing
> and matrix manipulation intrinsic procedures.
> One of the intrinsics is "matmul"  ===> z = matmul(x,y).
> Under Ideal conditions a compiler writer targeting a particular machine
> could take advantage of the machine's architecture and
> it would be difficult to beat "matmul"
> unless Ada [of which I know little] has a matmul equivalent.
> Unfortunately, there is no GNU f95 compiler [yet].

That's even easier.  The Ada version could call the f90 intrinsic too.
All you would need to do is link the f90 library which contains matmul.

E. Robert Tisdale <edwin@netwood.net>






* Re: Matrix Multiplication
  1999-12-14  0:00 William Dale
  1999-12-14  0:00 ` Gautier
  1999-12-14  0:00 ` David C. Hoos, Sr.
@ 1999-12-15  0:00 ` Greg Lindahl
  1999-12-15  0:00   ` Preben Randhol
  1999-12-15  0:00 ` Ted Dennison
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 81+ messages in thread
From: Greg Lindahl @ 1999-12-15  0:00 UTC (permalink / raw)


William Dale <william.dale.jr@lmco.com> writes:

> So could I get some help getting times for a matrix inversion on a 50x50
> floating point matrix in both languages.

This is bad for two reasons. The first is that people do operations like
that using subroutine libraries written in assembly, no matter what
language you are using. The second is that matrix inversion usually is
a bad idea, and you should use LU decomposition instead.

-- g






* Re: Matrix Multiplication
       [not found]   ` <5l1f5s4kck891a2s6o8bhvkirm4q79hm6c@4ax.com>
@ 1999-12-15  0:00     ` Gautier
  1999-12-15  0:00       ` Marin D. Condic
  0 siblings, 1 reply; 81+ messages in thread
From: Gautier @ 1999-12-15  0:00 UTC (permalink / raw)


> Intrigued about the 'renames' bit.  I thought the renames was just a
> compiler overhead and had no run-time effect at all.

IIRC the renames `aliases' its target in its present state.
E.g.
  declare
    x: thing renames complicated(i,j).k;
  begin
    -- i,j could change; it doesn't affect x

The ARM gurus will comment better...

Anyway it is a way to obtain a direct pointer to `complicated(i,j).k',
not just a syntactic alias. In my tests with sparse matrices, a
`renames' in the matrix-vector multiplication simply *halves* the computation
time of the whole BiCGStab algorithm (compiled on Compaq (DEC) Ada)!

The `renames' are

  procedure Mult( A: in out CRS_matrix; u: vector; w: in out vector ) is
   deb, fin, jaj: index;
   ui, wi, wi_sum: real;
   rows: constant index:= A.rows;
   val: vector renames A.val;
   col_ind: index_array renames A.col_ind;
   row_ptr: index_array renames A.row_ptr;

   begin

      if  not A.symmetric  then
         -- *** the matrix is stored in non-symmetric form

         for i in 1..rows loop
            deb := row_ptr(i);
            fin := row_ptr(i + 1) - 1;
            wi_sum := 0.0;
            for j in deb .. fin loop
               wi_sum := wi_sum + val(j) * u(col_ind(j));
            end loop;
            w(i) := wi_sum;
         end loop;
...

GNAT's optimizer (-O2) is smarter about memorizing these addresses,
but sometimes one has to help it too with a small `renames'
(tests with textured 3D)...
 
-- 
Gautier

_____\\________________\_______\_________
http://members.xoom.com/gdemont/gsoft.htm





* Re: Matrix Multiplication
  1999-12-15  0:00 ` Greg Lindahl
@ 1999-12-15  0:00   ` Preben Randhol
  0 siblings, 0 replies; 81+ messages in thread
From: Preben Randhol @ 1999-12-15  0:00 UTC (permalink / raw)


lindahl@pbm.com (Greg Lindahl) writes:

| language you are using. The second is that matrix inversion usually is
| a bad idea, and you should use LU decomposition instead.

I agree with you there. LU with pivoting, that is; otherwise the result
you get could be dubious due to round-off errors.
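
For concreteness, a rough Ada sketch of elimination with partial
pivoting - solving A x = b directly, never forming the inverse; the
names and the 3x3 test data are invented for illustration:

   with Ada.Text_IO; use Ada.Text_IO;

   procedure Solve_Demo is
      N : constant := 3;
      type Matrix is array (1 .. N, 1 .. N) of Long_Float;
      type Vector is array (1 .. N) of Long_Float;

      A : Matrix := ((2.0, 1.0, 1.0),
                     (4.0, 3.0, 3.0),
                     (8.0, 7.0, 9.0));
      B : Vector := (1.0, 2.0, 3.0);

      procedure Solve (A : in out Matrix; B : in out Vector) is
         P    : Integer;
         T, M : Long_Float;
      begin
         for K in 1 .. N - 1 loop
            --  Partial pivoting: bring the largest |A(I,K)| up to row K.
            P := K;
            for I in K + 1 .. N loop
               if abs A (I, K) > abs A (P, K) then
                  P := I;
               end if;
            end loop;
            if P /= K then
               for J in 1 .. N loop
                  T := A (K, J);  A (K, J) := A (P, J);  A (P, J) := T;
               end loop;
               T := B (K);  B (K) := B (P);  B (P) := T;
            end if;
            --  Eliminate the entries below the pivot.
            for I in K + 1 .. N loop
               M := A (I, K) / A (K, K);
               for J in K .. N loop
                  A (I, J) := A (I, J) - M * A (K, J);
               end loop;
               B (I) := B (I) - M * B (K);
            end loop;
         end loop;
         --  Back substitution: B is overwritten with the solution x.
         for I in reverse 1 .. N loop
            for J in I + 1 .. N loop
               B (I) := B (I) - A (I, J) * B (J);
            end loop;
            B (I) := B (I) / A (I, I);
         end loop;
      end Solve;

   begin
      Solve (A, B);
      for I in 1 .. N loop
         Put_Line (Long_Float'Image (B (I)));
      end loop;
   end Solve_Demo;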

-- 
Preben Randhol -- [randhol@pvv.org] -- [http://www.pvv.org/~randhol/]     
         "Det eneste trygge stedet i verden er inne i en fortelling." 
                                                      -- Athol Fugard





* RE: Matrix Multiplication
@ 1999-12-15  0:00 Carlisle, Martin
  1999-12-15  0:00 ` Mario Klebsch
                   ` (2 more replies)
  0 siblings, 3 replies; 81+ messages in thread
From: Carlisle, Martin @ 1999-12-15  0:00 UTC (permalink / raw)
  To: comp.lang.ada

The idea that matrix multiplication would always be inlined seems absurd to
me.  The simple implementation has O(n^3) running time.  Any enhancement
requires a fair amount of code, which I surely wouldn't want replicated
every time I did a matrix multiply.  The overhead of a function call, in
this case, would be so small as to be negligible.

--Martin

----------------------------------------------
Martin C. Carlisle, PhD
Assistant Professor of Computer Science
US Air Force Academy
DISCLAIMER:  This message represents the author's opinions, and not
necessarily those of the US Air Force Academy or the US Air Force. 

-----Original Message-----
From: Richard Maine [mailto:maine@qnet.com]
Sent: Tuesday, December 14, 1999 8:25 PM
To: comp.lang.ada@ada.eu.org
Subject: Re: Matrix Multiplication


"E. Robert Tisdale" <edwin@netwood.net> writes:

> That's even easier.  The Ada version could call the f90 intrinsic too.
> All you would need to do is link the f90 library which contains matmul.

I will not get into the language comparison or benchmarking aspects of
this thread.

I just note that "linking the f90 library that contains matmul" isn't
necessarily that straightforward.  Matmul is an intrinsic.  It is not
at all given that there even *IS* a library that contains it; a compiler
is quite free to always do it inline.  And even if much of the work is
in a library routine, the interface to it isn't necessarily known
outside of the compiler.

Intrinsics are basically part of the compiler internals.  They *MAY* be
implemented with callable library routines, but there is no guarantee of
that.
And it certainly isn't a portable way to call them.

-- 
Richard Maine
maine@qnet.com

_______________________________________________
comp.lang.ada mailing list
comp.lang.ada@ada.eu.org
http://ada.eu.org/cgi-bin/mailman/listinfo/comp.lang.ada








* Re: Matrix Multiplication
  1999-12-15  0:00     ` Gautier
@ 1999-12-15  0:00       ` Marin D. Condic
  1999-12-15  0:00         ` Gautier
  1999-12-15  0:00         ` Ted Dennison
  0 siblings, 2 replies; 81+ messages in thread
From: Marin D. Condic @ 1999-12-15  0:00 UTC (permalink / raw)


Gautier wrote:
> 
> > Intrigued about the 'renames' bit.  I thought the renames was just a
> > compiler overhead and had no run-time effect at all.
> 
> IIRC the renames `aliases' its target in its present state.
> E.g.
>   declare
>     x: thing renames complicated(i,j).k;
>   begin
>     -- i,j could change; it doesn't affect x
> 
> The ARM gurus will comment better...

It would seem that if the compiler were smart, it would compute I and J
once and find the position once, then reuse it throughout the loop. (I'm
presuming this would be done in nested "for" loops.) I'm not an expert
in compiler theory, but I recall seeing output from more than one
compiler/language doing this sort of thing. (Loop-invariant code? Code
hoisting? Some terminology I've long since archived to tape.) So while
the "renames" may help the compiler along in this respect, I think that
it could/should get there without the help.

Of course, I've seen lots of things where a compiler "ought" to do
something, but won't unless you trick it into doing it with syntactic
deceptions. That's more a comment on the quality of the compiler. In this
case, I'd think that Fortran could produce the optimization without a
"renames" so Ada ought to be able to do the same.

The original question in this thread had to do with "Ada can't do
floating point math as fast as Fortran" - which is slightly different
from matrix math in Ada/Fortran. Maybe someone can correct me if I'm
wrong (or just plain call me an ignoramus! :-), but I don't see much
syntactic or semantic difference between Ada arithmetic and Fortran
arithmetic. For that matter, there isn't much apparent difference in
array processing. So once you disable the runtime checks (O.K., maybe
that's part of the semantics), the differences don't amount to a warm
bucket of spit. Any performance difference should be attributable to the
quality of the compilers in question. (This is probably the most often
misunderstood thing about language efficiency. Most of the "unwashed
masses" seem to be incapable of distinguishing between the quality of a
language and the quality of a specific compiler. So the compiler writers
have a special responsibility to put out good quality products lest they
besmirch the language in the minds of many! :-)

MDC
-- 
=============================================================
Marin David Condic   - Quadrus Corporation -   1.800.555.3393
1015-116 Atlantic Boulevard, Atlantic Beach, FL 32233
http://www.quadruscorp.com/

Visit my web site at:  http://www.mcondic.com/

"Capitalism without failure is like religion without sin." 
        --  Allan Meltzer, Economist 
=============================================================





* Re: Matrix Multiplication
  1999-12-15  0:00   ` Robert A Duff
@ 1999-12-15  0:00     ` Marin D. Condic
  1999-12-16  0:00     ` Pascal Obry
  1999-12-16  0:00     ` Dieter Britz
  2 siblings, 0 replies; 81+ messages in thread
From: Marin D. Condic @ 1999-12-15  0:00 UTC (permalink / raw)


Robert A Duff wrote:
> 
> "Pascal Obry" <pascal_obry@csi.com> writes:
> 
> > Well I don't know what is a fast or slow language !!
> 
> I do.  A fast language is one for which it is feasible to build
> compilers that generate fast code.  A slow language is one for which
> that is not feasible.

If, for example, Ada required runtime range checks which were illegal to
disable or no mechanism was available to remove them, then you'd have a
"slow language" - right? Or if the definition of "floating point" in Ada
were such that it was impossible to map to machine hardware (some
bizarre requirement to use packed decimal representation, or to use so
many bits that nobody's hardware would support it, or something
similar), then the language itself would be slow.

There are lots of ways I can think of to make a language slow. Maybe we
need to have a "slowest language construct" contest? Everyone submits
their favorite "slow" feature (From Ada or any other language) and we
challenge the compiler writers to find the fastest possible
implementation? What does the winner get - besides the extra work of
implementing the optimized feature? :-)

MDC
-- 
=============================================================
Marin David Condic   - Quadrus Corporation -   1.800.555.3393
1015-116 Atlantic Boulevard, Atlantic Beach, FL 32233
http://www.quadruscorp.com/

Visit my web site at:  http://www.mcondic.com/

"Capitalism without failure is like religion without sin." 
        --  Allan Meltzer, Economist 
=============================================================





* Re: Matrix Multiplication
  1999-12-15  0:00 ` Ted Dennison
@ 1999-12-15  0:00   ` William B. Clodius
  1999-12-15  0:00     ` Ted Dennison
  0 siblings, 1 reply; 81+ messages in thread
From: William B. Clodius @ 1999-12-15  0:00 UTC (permalink / raw)


There are a number of aspects of a language design that influence the
performance of language implementations. By far the biggest influence is
the amount of dynamism the language allows: if the types associated with
the expressions of a statement can change during multiple passes over a
statement, then I believe that full optimization is equivalent to the
halting problem. At least some scripting languages suffer from this
problem to an extreme extent, while dynamic dispatch in statically
compiled object orientation is a solution that attempts to minimize, but
not eliminate, this problem.

However, this problem does not generally pertain to the Fortran/Ada
numerics question, where the types are determined statically and
generally map to the equivalent hardware operations. In this case a
common performance difference between languages is the extent to which
the language can rely on local analyses for its optimizations, which in
turn mostly depends on the aliasing properties of entities in the
language, which in turn is most strongly influenced by the properties of
the arguments to procedures. In this case Ada has relatively loose rules
compared to some other languages, e.g., C/C++, but stronger rules than
Fortran, so that Ada is in principle harder to optimize than Fortran.
However, the difference between Fortran and Ada in this regard is small
enough that it might be washed out in the noise of variations in
efficiency of compiler implementations.

Note that aliasing properties mostly affect the efficiency of "library"
routines. A "simple" coding of matmul in a "main" procedure is often
amenable to local analysis in any statically compiled language.





* Re: Matrix Multiplication
  1999-12-15  0:00     ` Ted Dennison
@ 1999-12-15  0:00       ` William B. Clodius
  1999-12-16  0:00         ` Robert A Duff
  0 siblings, 1 reply; 81+ messages in thread
From: William B. Clodius @ 1999-12-15  0:00 UTC (permalink / raw)




Ted Dennison wrote:
> 
> In article <3857E220.26AE90BB@lanl.gov>,
>   "William B. Clodius" <wclodius@lanl.gov> wrote:
> > numerics question where the types are determined statically and
> > generally map to the equivalent hardware operations. In this case a
> > common performance difference between languages is the extent to which
> > the language can rely on local analyses for its optimizations, which
> > in turn mostly depends on the aliasing properties of entities in the
> > language, which in turn is most strongly influenced by the properties
> > of the arguments to procedures. In this case Ada has relatively loose
> > rules compared to some other languages, e.g., C/C++, but stronger
> > rules than Fortran, so that Ada is in principle harder to optimize
> > than Fortran. However, the difference between Fortran and Ada in this
> 
> That is a fairly accurate principle. "So why did I end up finding so many
> 'Ada could do better' situations?" you may ask. Well, there are two issues
> you missed in the above:
>   1)  In general, the more information a compiler has about the possible
> values of objects, the better it can optimize. That gives strongly typed
> languages an advantage. You may turn off the runtime checking, but the
> compiler can still assume the values will never exceed that range for
> code generation purposes.
> 
> For instance, a big issue is how much information about the possible
> values of the iteration variables the compiler has access to. The
> ability of an Ada developer to constrain his index types using subtypes
> potentially gives the compiler a tremendous boost here.

Yes it is a win, but I don't know how big a win it is compared to the
requirement in both Ada and Fortran that indexing outside the visible
indices of the array is illegal. I would expect that in the vast
majority of cases, both assumptions are equivalent. Of course what I
expect might not be what I would find.

> 
>   2)  You ignore non-local optimization issues. That's fair enough. But
> in their quest for more speed, researchers are delving into non-local
> optimizations. Ada's strong typing drastically reduces the possible access
> paths the optimizer has to worry about for a location or value. Also, a
> lot of the language was written to allow compilers leeway in optimizing.
> For example, users aren't allowed to make assumptions about the
> parameter passing method used for subroutines except in certain
> situations. Generally the more information and freedom a compiler has at
> its disposal, the easier it will be for it to optimize.

I did not discuss non-local optimization issues, but one of your
arguments in favor of such optimizations for Ada, "users aren't allowed
to make assumptions about the parameter passing method used for
subroutines except in certain situations" is even more an argument in
favor of Fortran, which allows even fewer assumptions. That is precisely
what I meant by Ada having even stronger aliasing rules than Fortran for
arguments to its procedures. If I remember correctly, Ada requires
either copy-in/copy-out or call by reference semantics by the compiler
(without specifying the choice made) in most contexts, while Fortran
requires that code allow either copy-in/copy-out or call by reference
implementation, but does not require that the actual implementation be
consistent with either semantics, i.e. at arbitrary points in the
procedure arguments can be copied-in/copied-out without synchronizing
with other copy-ins/copy-outs. The other argument about strong typing is
not a strong argument against Fortran, as by almost any definition
Fortran is strongly typed (albeit some aspects of its typing are error
prone, e.g., implicit typing).

A better argument might be that Ada has better dataflow control
constructs than Fortran 77 and 66 (and in some aspects better than
Fortran 90, though WHERE and the more recent FORALL have their
advantages), but I don't know how strong an argument that is given that
reputedly most compilers use intermediate forms that replace those
constructs by their unstructured equivalents.

One reason I did not bring up non-local optimization is that for some
languages important cases of such optimizations are known to be NP hard
in the general case, e.g., C argument aliasing in the absence of C99's
restrict. I do not know how pertinent that is for Ada vs Fortran.

An additional problem in this discussion is that some compilers may have
a switch that in effect implements: this code has additional semantics
not specified by the standard. 

> <snip>





* Re: Matrix Multiplication
  1999-12-15  0:00           ` Gautier
  1999-12-15  0:00             ` Tucker Taft
  1999-12-16  0:00             ` Ted Dennison
@ 1999-12-16  0:00             ` Ted Dennison
  2 siblings, 0 replies; 81+ messages in thread
From: Ted Dennison @ 1999-12-16  0:00 UTC (permalink / raw)


In article <38581366.9078A2E8@maths.unine.ch>,
  Gautier <Gautier.deMontmollin@maths.unine.ch> wrote:
> > Hmmm. I think that's my cue to point out that ObjectAda has no loop
> > optimization options whatsoever. :-)
>
> Is there optimization at all in OA ?
>
> I didn't find the switches for that...
> Maybe a hidden panel in the interface - a bit like...
>   http://lsewww.epfl.ch/wiesmann/jokes/errors/word.gif
> (just a bit)
>
> --
> Gautier
>
> _____\\________________\_______\
> http://members.xoom.com/gdemont/
>

--
T.E.D.


Sent via Deja.com http://www.deja.com/
Before you buy.





* Re: Matrix Multiplication
  1999-12-15  0:00           ` Gautier
  1999-12-15  0:00             ` Tucker Taft
@ 1999-12-16  0:00             ` Ted Dennison
  1999-12-16  0:00             ` Ted Dennison
  2 siblings, 0 replies; 81+ messages in thread
From: Ted Dennison @ 1999-12-16  0:00 UTC (permalink / raw)


In article <38581366.9078A2E8@maths.unine.ch>,
  Gautier <Gautier.deMontmollin@maths.unine.ch> wrote:
> Is there optimization at all in OA ?
>
> I didn't find the switches for that...

In the Win32 OA (as of 6 months ago when last I checked) there are no
optimization options available. That does not mean that no optimization
is going on. But it does appear to mean that not much optimization is
going on.

I suspect the situation is quite different with their embedded targets.

--
T.E.D.


Sent via Deja.com http://www.deja.com/
Before you buy.





* Re: Matrix Multiplication
  1999-12-15  0:00       ` William B. Clodius
@ 1999-12-16  0:00         ` Robert A Duff
  1999-12-16  0:00           ` William B. Clodius
  0 siblings, 1 reply; 81+ messages in thread
From: Robert A Duff @ 1999-12-16  0:00 UTC (permalink / raw)


"William B. Clodius" <wclodius@lanl.gov> writes:

>...If I remember correctly, Ada requires
> either copy-in/copy-out or call by reference semantics by the compiler
> (without specifying the choice made) in most contexts, ...

I believe RM-6.2(12) allows substantially more freedom to the compiler
than that.  But you are correct that it's not *quite* as much freedom as
Fortran allows.  Pretty close, though, I think.
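
A tiny illustration of that freedom, with invented names: RM-6.2 leaves
the parameter passing mechanism unspecified for a type like this, and
reading X after updating it through G is formally a bounded error, so
this program may legitimately print 0 or 1 depending on the compiler.

   with Ada.Text_IO; use Ada.Text_IO;

   procedure Mechanism_Demo is
      type Rec is record
         F : Integer := 0;
      end record;
      G : Rec;

      procedure P (X : in Rec) is
      begin
         G.F := 1;
         --  By copy: X.F is still 0.  By reference: X.F now reads 1.
         Put_Line (Integer'Image (X.F));
      end P;
   begin
      P (G);
   end Mechanism_Demo;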

>...while Fortran
> requires that code allow either copy-in/copy-out or call by reference
> implementation, but does not require that the actual implementation be
> consistent with either semantics, i.e. at arbitrary points in the
> procedure arguments can be copied-in/copied-out without synchronizing
> with other copy-ins/copy-outs. ...

Anyway, there's an important non-technical issue: How much money are
people willing to expend on making optimizing compilers.

- Bob





* Re: Matrix Multiplication
  1999-12-16  0:00     ` Pascal Obry
@ 1999-12-16  0:00       ` Brian Rogoff
  1999-12-16  0:00       ` Greg Martin
  1 sibling, 0 replies; 81+ messages in thread
From: Brian Rogoff @ 1999-12-16  0:00 UTC (permalink / raw)



On Thu, 16 Dec 1999, Pascal Obry wrote:
> Robert A Duff <bobduff@world.std.com> wrote in message:
> wcchfhkxjeg.fsf@world.std.com...
> > I do.  A fast language is one for which it is feasible to build
> > compilers that generate fast code.  A slow language is one for which
> > that is not feasible.
> 
> I pretty well understand that, but we are then talking about implementation
> in a compiler.
> 
> > Also I prefer to put the burden of proof on the language advocates --
> > that is, a language should be considered "slow" until proven "fast" by
> > the existence of at least one good production-quality compiler.
> >
> > By this definition, Smalltalk, for example, is slow -- I've never seen a
> > Smalltalk compiler that can generate fast code.  Furthermore, it seems

I like Bob Duff's definition, though it probably needs a bit more
elaboration. A few years ago Richard O'Keefe posted some interesting 
microbenchmarks to comp.lang.ada comparing Pascal, Scheme, and Ada 
programs for two dimensional integration using downward funargs, with the 
(GNAT) Ada code using generics to simulate this. The Scheme code was the
fastest. The Scheme compiler was an aggressive whole-program optimizer 
called Stalin. Despite the success of this microbenchmark, and of Stalin, I
suspect that Scheme would be classified as a "slow" language, like
Smalltalk, since that compiler has not proven itself "production quality" yet.
So what's the litmus test for being production quality?

> Ok, we have never seen one, but is it really impossible ? I do remember
> something about IBM creating a Smalltalk compiler, is that true ? How
> fast was it ?

You can create compilers for any language. However, Smalltalk keeps a lot
of info around at runtime, so for example, when doing integer arithmetic 
you get transparent overflow into bignums if you need them. That costs!

If your Smalltalk were to make each element of a float or complex array a 
self identifying (tagged) object then I guarantee you Smalltalk numerical 
linear algebra will run much slower than Fortran or Ada!

-- Brian








* Re: Matrix Multiplication
  1999-12-16  0:00 Matrix Multiplication Carlisle, Martin
@ 1999-12-16  0:00 ` Howard W. LUDWIG
  0 siblings, 0 replies; 81+ messages in thread
From: Howard W. LUDWIG @ 1999-12-16  0:00 UTC (permalink / raw)


Well, I've worked on a processor which has a rather complex instruction set,
although it did not include matrix multiplication as an instruction.  My
favorite instruction was the one that allowed me to do FFTs on a power-of-2
number of complex data items, up to 64.  The Ada compiler vendor had lots of
fun with this machine--they created a package of Machine-Specific Procedures
which we could call and which would be compiled/translated into the
corresponding assembly instruction inline.  About 90 % of the code looked
like assembly language with Ada syntax (semicolon at the end of the
statement, parentheses around the set of operands, ...).

Now you might ask why we did such a thing--why not just write in assembly 
language in the first place?  Let's just say--overzealous bureaucrats 
enforcing the "single language policy" (also commonly known as "the Ada 
Mandate").

Howard W. LUDWIG

"Carlisle, Martin" wrote:
> 
> Suppose you have such a machine (wow! talk about complex instruction sets!).
> I then grant that any good Fortran compiler would use it.  However, it also
> follows that any Ada compiler could create a small function to use it (e.g.
> using Machine code insertions), and then pragma Inline that function.
> 
> --Martin





* Re: Matrix Multiplication
  1999-12-16  0:00     ` Pascal Obry
  1999-12-16  0:00       ` Brian Rogoff
@ 1999-12-16  0:00       ` Greg Martin
  1 sibling, 0 replies; 81+ messages in thread
From: Greg Martin @ 1999-12-16  0:00 UTC (permalink / raw)


On Thu, 16 Dec 1999 09:32:45 +0100, "Pascal Obry" <p.obry@der.edf.fr>
wrote:


>Ok, we have never seen one, but is it really impossible?  I do remember
>something about IBM creating a Smalltalk compiler; is that true?  How
>fast was it?
>
I know nothing about it, but there is a VisualAge Smalltalk.
http://www-4.ibm.com/software/ad/smalltalk/
regards,
Greg Martin






* Re: Matrix Multiplication
  1999-12-15  0:00   ` Robert A Duff
  1999-12-15  0:00     ` Marin D. Condic
  1999-12-16  0:00     ` Pascal Obry
@ 1999-12-16  0:00     ` Dieter Britz
  2 siblings, 0 replies; 81+ messages in thread
From: Dieter Britz @ 1999-12-16  0:00 UTC (permalink / raw)


On Wed, 15 Dec 1999, Robert A Duff wrote:

> "Pascal Obry" <pascal_obry@csi.com> writes:
> 
> > Well I don't know what is a fast or slow language !!
> 
> I do.  A fast language is one for which it is feasible to build
> compilers that generate fast code.  A slow language is one for which
> that is not feasible.
> 
> Also I prefer to put the burden of proof on the language advocates --
> that is, a language should be considered "slow" until proven "fast" by
> the existence of at least one good production-quality compiler.
> 
> By this definition, Smalltalk, for example, is slow -- I've never seen a
> Smalltalk compiler that can generate fast code.  Furthermore, it seems
> impossible, without doing all code generation at link time, which I
> claim is not feasible in many cases.
> 
> I don't know whether Fortran is faster than Ada at matrix multiplies,
> but it does seem like a meaningful question to ask.  If you measured
> lots of compilers, you could learn something useful.

This must depend on the specific compiler. Compilers have become better at
optimising code over the last couple of decades. Years ago, I often needed
to shift large array sections, and (on a PDP11, under RT11) wrote
myself an assembler-code subroutine to do the shift; that turned out
to run about 100 times as fast as the equivalent Fortran code. I feel
sure that now, there would not be so much difference, if any (but I
don't have an assembler anymore). Later, I compared Pascal and Fortran 77
on a VAX machine, and Fortran was, on average, about twice as fast. It
might depend on what sort of operations you normally program.

-- Dieter Britz alias db@kemi.aau.dk;  http://www.kemi.aau.dk/~db
*** Echelon, bomb, sneakers, GRU: swamp the snoops with trivia! ***






* Re: Matrix Multiplication
  1999-12-15  0:00   ` Robert A Duff
  1999-12-15  0:00     ` Marin D. Condic
@ 1999-12-16  0:00     ` Pascal Obry
  1999-12-16  0:00       ` Brian Rogoff
  1999-12-16  0:00       ` Greg Martin
  1999-12-16  0:00     ` Dieter Britz
  2 siblings, 2 replies; 81+ messages in thread
From: Pascal Obry @ 1999-12-16  0:00 UTC (permalink / raw)




Robert A Duff <bobduff@world.std.com> wrote in message:
wcchfhkxjeg.fsf@world.std.com...
> I do.  A fast language is one for which it is feasible to build
> compilers that generate fast code.  A slow language is one for which
> that is not feasible.

I pretty well understand that, but we are then talking about implementation
in a compiler.

>
> Also I prefer to put the burden of proof on the language advocates --
> that is, a language should be considered "slow" until proven "fast" by
> the existence of at least one good production-quality compiler.
>
> By this definition, Smalltalk, for example, is slow -- I've never seen a
> Smalltalk compiler that can generate fast code.  Furthermore, it seems

Ok, we have never seen one, but is it really impossible?  I do remember
something about IBM creating a Smalltalk compiler; is that true?  How
fast was it?

> impossible, without doing all code generation at link time, which I
> claim is not feasible in many cases.
>
> I don't know whether Fortran is faster than Ada at matrix multiplies,
> but it does seem like a meaningful question to ask.  If you measured

This still seems a strange question to me! Certainly the algorithm used in
the implementation has more to do with the speed than the language itself.

Pascal.

--

--|------------------------------------------------------------
--| Pascal Obry                               Team-Ada Member |
--|                                                           |
--| EDF-DER-IPN-SID- T T I                                    |
--|                       Intranet: http://cln46gb            |
--| Bureau N-023            e-mail: pascal.obry@edf.fr        |
--| 1 Av Général de Gaulle  voice : +33-1-47.65.50.91         |
--| 92141 Clamart CEDEX     fax   : +33-1-47.65.50.07         |
--| FRANCE                                                    |
--|------------------------------------------------------------
--|
--|   http://ourworld.compuserve.com/homepages/pascal_obry
--|
--|   "The best way to travel is by means of imagination"








* RE: Matrix Multiplication
@ 1999-12-16  0:00 Carlisle, Martin
  1999-12-16  0:00 ` Howard W. LUDWIG
  0 siblings, 1 reply; 81+ messages in thread
From: Carlisle, Martin @ 1999-12-16  0:00 UTC (permalink / raw)
  To: comp.lang.ada

Suppose you have such a machine (wow! talk about complex instruction sets!).
I then grant that any good Fortran compiler would use it.  However, it also
follows that any Ada compiler could create a small function to use it (e.g.
using Machine code insertions), and then pragma Inline that function.
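
For example, with GNAT's machine-code insertion package (a sketch only -
since the instruction itself is hypothetical, a "nop" stands in for it):

   with System.Machine_Code; use System.Machine_Code;

   package Fast_Mult is
      type Matrix is array (1 .. 4, 1 .. 4) of Float;
      procedure Matmul (A, B : in Matrix; C : out Matrix);
      pragma Inline (Matmul);  --  the wrapper vanishes at the call site
   end Fast_Mult;

   package body Fast_Mult is
      procedure Matmul (A, B : in Matrix; C : out Matrix) is
      begin
         --  A real port would bind A, B and C to the operands the
         --  instruction expects, via 'Asm_Input/'Asm_Output constraints.
         Asm ("nop", Volatile => True);
         C := (others => (others => 0.0));  --  placeholder result
      end Matmul;
   end Fast_Mult;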

--Martin

-----Original Message-----
From: mario@klebsch.de [mailto:mario@klebsch.de]
Sent: Wednesday, December 15, 1999 11:01 AM
To: comp.lang.ada@ada.eu.org
Subject: Re: Matrix Multiplication


"Carlisle, Martin" <Martin.Carlisle@usafa.af.mil> writes:

>The idea that matrix multiplication would always be inlined seems absurd to
>me.  The simple implementation has O(n^3) running time.

What about compiling for a CPU that does have an instruction for
matrix multiplication? It seems absurd to me not to use that
instruction if it is available.

73, Mario
-- 
Mario Klebsch						mario@klebsch.de

_______________________________________________
comp.lang.ada mailing list
comp.lang.ada@ada.eu.org
http://ada.eu.org/cgi-bin/mailman/listinfo/comp.lang.ada








* Re: Matrix Multiplication
  1999-12-16  0:00         ` Robert A Duff
@ 1999-12-16  0:00           ` William B. Clodius
  0 siblings, 0 replies; 81+ messages in thread
From: William B. Clodius @ 1999-12-16  0:00 UTC (permalink / raw)




Robert A Duff wrote:
> <snip>
> Anyway, there's an important non-technical issue: How much money are
> people willing to expend on making optimizing compilers.
> <snip>

It sometimes seems like it's more than they are willing to spend on
correct compilers.




^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: Matrix Multiplication
  1999-12-15  0:00         ` Gautier
@ 1999-12-16  0:00           ` Marin D. Condic
  1999-12-27  0:00             ` Jeffrey L Straszheim
  0 siblings, 1 reply; 81+ messages in thread
From: Marin D. Condic @ 1999-12-16  0:00 UTC (permalink / raw)


Gautier wrote:

> Where did you get the impression that the optimizer would miss this?

From you. :-) The claim that the "renames" would help sort of presumes
that the optimizer might miss it.

> 
> But there are *rare* cases where such "invariants" are not provable as such,
> and a renames helps, even with GNAT. On the other side, some Ada 83 compilers
> do not seem to optimize these things (Compaq Ada, Alsys).
> 

I'll take your word for that. Still, it seems that in *most* cases Ada
and Fortran ought to be able to generate equally efficient code for the
garden-variety floating point math and array reference operations. The
rest is arguing about the relative quality of different compilers. That
is the grand misunderstanding of so many people who say "Ada makes
slower code than Fortran for XYZ..."

MDC
-- 
=============================================================
Marin David Condic   - Quadrus Corporation -   1.800.555.3393
1015-116 Atlantic Boulevard, Atlantic Beach, FL 32233
http://www.quadruscorp.com/

Visit my web site at:  http://www.mcondic.com/

"Capitalism without failure is like religion without sin." 
        --  Allan Meltzer, Economist 
=============================================================




^ permalink raw reply	[flat|nested] 81+ messages in thread

* RE: Matrix Multiplication
  1999-12-15  0:00 Carlisle, Martin
  1999-12-15  0:00 ` Mario Klebsch
@ 1999-12-19  0:00 ` Robert Dewar
  1999-12-19  0:00 ` Robert Dewar
  2 siblings, 0 replies; 81+ messages in thread
From: Robert Dewar @ 1999-12-19  0:00 UTC (permalink / raw)


In article
<9BBB0C9AF506D311A68E00902745A537C236B1@fsxqpz04.usafa.af.mil>,
  comp.lang.ada@ada.eu.org wrote:
> The idea that matrix multiplication would always be inlined seems absurd
> to me.  The simple implementation has O(n^3) running time.  Any
> enhancement requires a fair amount of code, which I surely wouldn't want
> replicated every time I did a matrix multiply.  The overhead of a
> function call, in this case, would be so small as to be negligible.


Of course the naive n**3 implementation is appallingly slow for
large matrices because of cache effects; no one would use this
for large arrays, and indeed the rather complex algorithms
that WOULD be used are quite unsuitable for inlining.
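
To see why, here is a sketch of a cache-blocked multiply, and blocking
is only the first of the tricks such algorithms use (assumes square
matrices indexed 1 .. N; the tile size is illustrative, real libraries
tune it and also copy tiles into contiguous buffers):

type Matrix is array (Positive range <>, Positive range <>) of Float;

procedure Blocked_Mul (A, B : Matrix; C : in out Matrix) is
   Tile : constant := 32;
   N    : constant Natural := A'Last (1);
begin
   C := (C'Range (1) => (C'Range (2) => 0.0));
   for JJ in 0 .. (N - 1) / Tile loop
      for KK in 0 .. (N - 1) / Tile loop
         --  One Tile x Tile block of B is reused across all rows I,
         --  so it stays in cache instead of being streamed N times.
         for I in 1 .. N loop
            for K in KK * Tile + 1 .. Natural'Min (N, (KK + 1) * Tile) loop
               for J in JJ * Tile + 1 .. Natural'Min (N, (JJ + 1) * Tile) loop
                  C (I, J) := C (I, J) + A (I, K) * B (K, J);
               end loop;
            end loop;
         end loop;
      end loop;
   end loop;
end Blocked_Mul;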


Sent via Deja.com http://www.deja.com/
Before you buy.




^ permalink raw reply	[flat|nested] 81+ messages in thread


* Re: Matrix Multiplication
  1999-12-15  0:00 ` Mario Klebsch
@ 1999-12-19  0:00   ` Robert Dewar
  0 siblings, 0 replies; 81+ messages in thread
From: Robert Dewar @ 1999-12-19  0:00 UTC (permalink / raw)


In article <60l838.ocd.ln@ds9.klebsch.de>,
  mario@klebsch.de (Mario Klebsch) wrote:
> "Carlisle, Martin" <Martin.Carlisle@usafa.af.mil> writes:
>
> >The idea that matrix multiplication would always be inlined seems
> >absurd to me.  The simple implementation has O(n^3) running time.
>
> What about compiling for a CPU that does have an instruction for
> matrix multiplication? It seems absurd to me not to use that
> instruction, if it is available.


It may seem absurd, but it is likely the case that if you DO
have such an instruction it should not be used in many cases.
The secret of reasonably efficient code for modern CISC machines
is often to ignore many of the junk instructions (this is for
sure true on the Pentium for example!)


Sent via Deja.com http://www.deja.com/
Before you buy.




^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: Matrix Multiplication
  1999-12-16  0:00           ` Marin D. Condic
@ 1999-12-27  0:00             ` Jeffrey L Straszheim
  0 siblings, 0 replies; 81+ messages in thread
From: Jeffrey L Straszheim @ 1999-12-27  0:00 UTC (permalink / raw)


Marin D. Condic wrote:
 
> I'll take your word for that. Still, it seems that in *most* cases Ada
> and Fortran ought to be able to generate equally efficient code for the
> garden-variety floating point math and array reference operations. The
> rest is arguing about the relative quality of different compilers. That
> is the grand misunderstanding of so many people who say "Ada makes
> slower code than Fortran for XYZ..."

I probably don't know what I'm talking about here, but one big
advantage Fortran seems to have over languages in the C family
is its lack of aliasing. That is, in Fortran the compiler can
usually assume that an array element will only be accessed from
within that array, and can aggressively optimize.

For instance in C:

void some_function (int x[], int length, int *something)
{
 int k;
 for (k = 1; k < length; ++k) {
   *something = x[k] + x[k-1];
  } 
}

For this function, the compiler cannot:

1. Assume that 'length' is constant -- which might have allowed
   some optimization.
2. Remember, in a register, the current iteration's 'x[k]', which
   could be used as the next iteration's 'x[k-1]'.

Fortunately, 'k' is local. Had it been global (yuck) then the
compiler couldn't even make any assumptions about 'k'.

Given Ada's more restrictive aliasing and stricter typing, it should
do better than C, but perhaps not as well as Fortran for some such
loops.

All this being said, I understand that new Fortrans allow some limited
types of aliasing (or maybe they don't). If so, perhaps even this
advantage is lost.
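
A rough Ada counterpart, for comparison (a sketch; the point is that
Something is a scalar "out" parameter, which is passed by copy, so it
cannot alias X):

type Int_Array is array (Positive range <>) of Integer;

procedure Some_Function (X : Int_Array; Something : out Integer) is
begin
   Something := 0;
   for K in X'First + 1 .. X'Last loop
      --  X (K) can be kept in a register and reused as the next
      --  iteration's X (K - 1): no store through Something can
      --  possibly change X.
      Something := X (K) + X (K - 1);
   end loop;
end Some_Function;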

-- Jeffrey Straszheim          
-- Systems Engineer, Programmer
-- http://www.shadow.net/~stimuli
-- stimuli AT shadow DOT net




^ permalink raw reply	[flat|nested] 81+ messages in thread

* Does Ada need elemental functions to make it suitable for scientific work?
@ 2012-07-09 23:27 Nasser M. Abbasi
       [not found] ` <d78nv7dhf88bqv7hrd9eft231a4h2scs10@invalid.netcom.com>
                   ` (5 more replies)
  0 siblings, 6 replies; 81+ messages in thread
From: Nasser M. Abbasi @ 2012-07-09 23:27 UTC (permalink / raw)



I have been trying Ada to see how suitable it is for computational
work (I am studying finite elements, and wanted to try Ada for
basic programs).

And even though I find its type system a great asset to help
catch many errors, I find it a little awkward for programming on
arrays and matrices which ones does alot in scientific numerical
programming. The most important feature which I found missing is
that Ada functions do not automatically work on arrays and matrices.

In Fortran, we see

http://en.wikipedia.org/wiki/Fortran_95_language_features#Elemental_operations.2C_assignments_and_procedures

"Most intrinsic functions are elemental and Fortran 95
extends this feature to non-intrinsic procedures"

Intrinsic functions in Fortran are those built in (like
sin/cos, etc...)

Let's look at a very simple example to help explain what I mean.
I wanted to raise each entry in a vector to the second power.

In Fortran, I can just write V**2, where V is a vector. In Ada,
I can't do that. Since ** is not defined on this array type
and this number type.

I would have to write a loop to iterate over V and do V(I)**2 on
each element.

This for me, is a step backward. This is how Fortran77 was.

I know I am a newbie in Ada, and I could have overlooked a
simple solution to do this. But I really do not want to
write packages and define operators each time, for each type
I make, to have it work as an elemental function.

This is something that should be built into the language.

Here are two complete trivial examples of what I mean.

I was wondering if there is a chance that something like
this can be added to Ada like what was done for Fortran?

---- fortran -------------------------
!-- showing how to use Fortran for vectored operations
!-- equations work on vectors, no need for loop
program f08
   implicit none

   integer, parameter :: N = 7
   real   , parameter :: D(N) = [-0.2,1.0,1.5,3.0,-1.0,4.2,3.1]
   real   , parameter :: H(N) = [2.1,2.4,1.8,2.6,2.6,2.2,1.8]
   real               :: V(N)

   V = D**2 * H       ! all vectored operations
   print *, v
end program f08
---------------------------------
>gfortran f08.f90
>./a.out
   8.39999989E-02  2.4000001 4.0499997 23.400000  2.5999999  38.807995  17.297998

---------Ada ------------------
procedure foo2 is
   N : integer :=7;
   type array_t is array(1..N) OF float;
   
   D : constant array_t :=(-0.2,1.0,1.5,3.0,-1.0,4.2,3.1);
   H : constant array_t :=(2.1,2.4,1.8,2.6,2.6,2.2,1.8);
   V : array_t;
   
   begin
     V := D**2 * H;
end foo2;
-------------------

>gnatmake foo2.adb
gcc -c foo2.adb
foo2.adb:11:10: invalid operand types for operator "**"
foo2.adb:11:10: left operand has type "array_t" defined at line 4
foo2.adb:11:10: right operand has type universal integer
gnatmake: "foo2.adb" compilation error
>
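
(A sketch of the kind of definition needed, with illustrative names; an
element-wise "*" for two vectors is written the same way, and with both
defined the line V := D**2 * H compiles:)

type Vec is array (Integer range <>) of Float;

function "**" (Left : Vec; Right : Integer) return Vec is
begin
   return R : Vec (Left'Range) do
      for I in Left'Range loop
         R (I) := Left (I) ** Right;   -- predefined Float ** Integer
      end loop;
   end return;
end "**";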

--Nasser



^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: Does Ada need elemental functions to make it suitable for scientific work?
       [not found] ` <d78nv7dhf88bqv7hrd9eft231a4h2scs10@invalid.netcom.com>
@ 2012-07-10  4:22   ` Nasser M. Abbasi
  2012-07-10 14:26     ` Marco
  0 siblings, 1 reply; 81+ messages in thread
From: Nasser M. Abbasi @ 2012-07-10  4:22 UTC (permalink / raw)


On 7/9/2012 10:59 PM, Dennis Lee Bieber wrote:

> 	Also, unless things have changed, the FORTRAN language standard
> defines at least two levels of the language; the full language you are
> describing, and a subset implementation which not only may not have the
> vectorization features,

There is only one Fortran ISO standard, just like with Ada.

http://gcc.gnu.org/wiki/GFortranStandards

--Nasser



^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: Does Ada need elemental functions to make it suitable for scientific work?
  2012-07-09 23:27 Does Ada need elemental functions to make it suitable for scientific work? Nasser M. Abbasi
       [not found] ` <d78nv7dhf88bqv7hrd9eft231a4h2scs10@invalid.netcom.com>
@ 2012-07-10  4:24 ` gautier_niouzes
  2012-07-10  5:22   ` Ada novice
  2012-07-10 11:06 ` Simon Wright
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 81+ messages in thread
From: gautier_niouzes @ 2012-07-10  4:24 UTC (permalink / raw)
  Cc: nma

On Tuesday, July 10, 2012 at 01:27:54 UTC+2, Nasser M. Abbasi wrote:

> In Fortran, I can just write V**2, where V is a vector. In Ada,
> I can't do that. Since ** is not defined on this array type
> and this number type.
> 
> I would have to write a loop to iterate over V and do V(I)**2 on
> each element.
> 
> This for me, is a step backward. This is how Fortran77 was.

Interesting example... How often do you need that, frankly?
The standard is there for... standard operations, like a * v (a is a number).
There are always things you might feel are missing.
For instance the shortcuts "and then" and "or else" are missing in Fortran, and in my opinion that is more cumbersome than the lack of v**2 in Ada.
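
(The point of the shortcuts being that the second operand is evaluated
only when needed; a small sketch:)

procedure Demo_Short_Circuit is
   V : array (1 .. 5) of Float := (others => 1.0);
   I : Integer := 6;
   S : Float := 0.0;
begin
   --  "and then" guarantees V (I) is never evaluated when the index
   --  test fails; a plain "and" would evaluate both operands and
   --  raise Constraint_Error here.
   if I in V'Range and then V (I) > 0.0 then
      S := S + V (I);
   end if;
end Demo_Short_Circuit;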

G.



^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: Does Ada need elemental functions to make it suitable for scientific work?
  2012-07-10  4:24 ` gautier_niouzes
@ 2012-07-10  5:22   ` Ada novice
  2012-07-10  7:27     ` Dmitry A. Kazakov
                       ` (2 more replies)
  0 siblings, 3 replies; 81+ messages in thread
From: Ada novice @ 2012-07-10  5:22 UTC (permalink / raw)
  Cc: nma

On Tuesday, July 10, 2012 5:24:57 AM UTC+1, (unknown) wrote:
> On Tuesday, July 10, 2012 at 01:27:54 UTC+2, Nasser M. Abbasi wrote:
> 
> > In Fortran, I can just write V**2, where V is a vector. In Ada,
> > I can't do that. Since ** is not defined on this array type
> > and this number type.
> > 
> > I would have to write a loop to iterate over V and do V(I)**2 on
> > each element.
> > 
> > This for me, is a step backward. This is how Fortran77 was.
> 

> Interesting example... How often do you need that, frankly?

Believe me, for someone who does numerical computations on a daily basis, an operation like V**2 is a NECESSITY. It's a pity that Ada does not offer such a facility. I do not blame those who stick to Matlab (or Fortran?) for doing numerical computations.

YC




^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: Does Ada need elemental functions to make it suitable for scientific work?
  2012-07-10  5:22   ` Ada novice
@ 2012-07-10  7:27     ` Dmitry A. Kazakov
  2012-07-10  8:06     ` gautier_niouzes
       [not found]     ` <637de084-0e71-4077-a1c5-fc4200cad3cf@googlegroups.com>
  2 siblings, 0 replies; 81+ messages in thread
From: Dmitry A. Kazakov @ 2012-07-10  7:27 UTC (permalink / raw)


On Mon, 9 Jul 2012 22:22:58 -0700 (PDT), Ada novice wrote:

>> Interesting example... How often do you need that, frankly?
> 
> Believe me, for someone who does numerical computations on a daily basis, an
> operation like V**2 is a NECESSITY.

Where is the problem? "**" is a user-definable operator in Ada. Moreover, it
can be overloaded. Note that V**2 is quite ambiguous; it could mean 

1. inner product
2. cross product
3. matrix: Transpose (V) * V
...

> It's a pity that Ada does not offer such a facility.

Ada does it. Maybe some library X does not, then take Y or extend X. 

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: Does Ada need elemental functions to make it suitable for scientific work?
  2012-07-10  5:22   ` Ada novice
  2012-07-10  7:27     ` Dmitry A. Kazakov
@ 2012-07-10  8:06     ` gautier_niouzes
       [not found]     ` <637de084-0e71-4077-a1c5-fc4200cad3cf@googlegroups.com>
  2 siblings, 0 replies; 81+ messages in thread
From: gautier_niouzes @ 2012-07-10  8:06 UTC (permalink / raw)
  Cc: nma

On Tuesday, July 10, 2012 at 07:22:58 UTC+2, Ada novice wrote:

> Believe me, for someone who does numerical computations on a daily basis, an operation like V**2 is a NECESSITY. It's a pity that Ada does not offer such a facility. I do not blame those who stick to Matlab (or Fortran?) for doing numerical computations.
Wait... There are *such* facilities and several built-in operations *like* that:
http://www.ada-auth.org/standards/12rm/html/RM-G-3-1.html

Apparently it is not enough. But it is possible to make a proposal for the next standard (and shorter term, for the next GNAT version).

The snag with that *specific* operation, v**2 meaning "v(i)**2 for each i", is that it is counter-intuitive from a linear algebra point of view. You would rather expect v**2 = v*v. Now another snag: should "*" be the dot or the cross product?...

These kinds of things are thought out carefully before landing in an Ada standard...

G.



^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: Does Ada need elemental functions to make it suitable for scientific work?
       [not found]     ` <637de084-0e71-4077-a1c5-fc4200cad3cf@googlegroups.com>
@ 2012-07-10  8:39       ` Nasser M. Abbasi
  2012-07-10  8:58         ` Ada novice
                           ` (3 more replies)
  2012-07-10 12:46       ` Brian Drummond
  1 sibling, 4 replies; 81+ messages in thread
From: Nasser M. Abbasi @ 2012-07-10  8:39 UTC (permalink / raw)


On 7/10/2012 3:02 AM, gautier.de.montmollin@gmail.com wrote:
> On Tuesday, July 10, 2012 at 07:22:58 UTC+2, Ada novice wrote:
>
>> Believe me, for someone who does numerical computations on a daily basis, an operation
>> like V**2 is a NECESSITY. It's a pity that Ada does not offer such a facility.
>> I do not blame those who stick to Matlab (or Fortran?) for doing numerical computations.
>
> Wait... There are *such* facilities and several built-in operations *like* that:
> http://www.ada-auth.org/standards/12rm/html/RM-G-3-1.html
>
> Apparently it is not enough. But it is possible to make a proposal for
> the next standard (and short term for the next GNAT version).
>

That is a start. But not enough by any means. All functions should be
vectorized. Even user defined ones.

In Ada, I can't even take the sin of a vector

----------------------
with Ada.Numerics.Elementary_Functions;
use Ada.Numerics.Elementary_Functions;

procedure foo3 is
   type array_t is array(1..5) OF float;
   D : constant array_t :=(0.1,0.2,0.3,0.4,0.5);
   V : array_t;
   
   begin
     -- V := sin(D);  -- ERROR

     for I in D'Range loop  -- must use a LOOP
         V(I) := sin(D(I));
     end loop;
        
end foo3;
--------------------

I can of course overload sin, hide the loop inside my own sine
and then do

    V := myPackage.sin(D);

All of this is possible in Ada. It is just a little (a lot?)
more effort compared to what is out there.

octave/Matlab:
--------------------------
D=[0.1,0.2,0.3,0.4,0.5];
sin(D)

ans =
     0.0998    0.1987    0.2955    0.3894    0.4794
---------------------------

> The snag with that *specific* operation, v**2 meaning "v(i)**2 for each i",
>is that it is counter-intuitive from a linear algebra point of view. You
>would rather expect v**2 = v*v. Now another snag is should "*" be the
>dot or the cross product ?...
>
> These kind of things are thought-out carefully before landing into an Ada standard...
>

But these issues were solved a long time ago (maybe
30 years ago?) and are well defined for all cases:

http://www.mathworks.com/help/techdoc/ref/arithmeticoperators.html

There are element-wise operators, and operators that work
on the whole vector or matrix.

In octave/Matlab world, V.^2 means element-wise. (notice
the ".")

-------------------------
octave:3> V=[1 2 3 4];
octave:4> v.^2
ans =

     1    4    9   16
--------------------------

Otherwise, it becomes standard matrix operation.
A^2 means A*A

------------------------
octave:17> A=[1 2;3 4]
A =
    1   2
    3   4

octave:18> A^2
ans =
     7   10
    15   22

octave:19> A*A
ans =
     7   10
    15   22
---------------------

To do a dot product, it has its own function, called dot()!

----------------------
octave:5> dot(V,V)
ans =  30
-------------------------

To do a cross product, it has its own function, called cross()!

-----------------------
octave:9> D=[1 2 3]; V=[3 4 5]; cross(D,V)
ans =

   -2   4  -2
-------------------

BTW, in Mathematica, it is the other way around. V*V does
element-by-element multiplication, and V.V does matrix product.
It also has Dot[] and Cross[], and all functions are vectorized;
all user-defined functions can be vectorized by simply
giving them the attribute Listable.

--Nasser



^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: Does Ada need elemental functions to make it suitable for scientific work?
  2012-07-10  8:39       ` Nasser M. Abbasi
@ 2012-07-10  8:58         ` Ada novice
  2012-07-10  9:07         ` Dmitry A. Kazakov
                           ` (2 subsequent siblings)
  3 siblings, 0 replies; 81+ messages in thread
From: Ada novice @ 2012-07-10  8:58 UTC (permalink / raw)
  Cc: nma

On Tuesday, July 10, 2012 9:39:58 AM UTC+1, Nasser M. Abbasi wrote:

> That is a start. But not enough by any means. All functions should be
> vectorized. Even user defined ones.
> 

Good examples to illustrate what other mathematical languages offer and what people doing calculations are missing daily with Ada. 

Let's hope for some changes in this respect in the next Ada standard.


YC



^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: Does Ada need elemental functions to make it suitable for scientific work?
  2012-07-10  8:39       ` Nasser M. Abbasi
  2012-07-10  8:58         ` Ada novice
@ 2012-07-10  9:07         ` Dmitry A. Kazakov
  2012-07-10  9:21           ` Nasser M. Abbasi
  2012-07-12  0:31           ` robin.vowels
  2012-07-12  0:22         ` robin.vowels
  2012-07-20  1:51         ` Randy Brukardt
  3 siblings, 2 replies; 81+ messages in thread
From: Dmitry A. Kazakov @ 2012-07-10  9:07 UTC (permalink / raw)


On Tue, 10 Jul 2012 03:39:58 -0500, Nasser M. Abbasi wrote:

> That is a start. But not enough by any means. All functions should be
> vectorized.

It is ill-defined. E.g. exp(A), where A is a matrix. Is exp(A) a matrix of
exponents or exponent matrix? To me it is the second. A similar example for
vectors: abs V. Is it a vector of absolute values, or maybe ||V||?

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: Does Ada need elemental functions to make it suitable for scientific work?
  2012-07-10  9:07         ` Dmitry A. Kazakov
@ 2012-07-10  9:21           ` Nasser M. Abbasi
  2012-07-10  9:26             ` Nasser M. Abbasi
                               ` (2 more replies)
  2012-07-12  0:31           ` robin.vowels
  1 sibling, 3 replies; 81+ messages in thread
From: Nasser M. Abbasi @ 2012-07-10  9:21 UTC (permalink / raw)


On 7/10/2012 4:07 AM, Dmitry A. Kazakov wrote:
> On Tue, 10 Jul 2012 03:39:58 -0500, Nasser M. Abbasi wrote:
>
>> That is a start. But not enough by any means. All functions should be
>> vectorized.
>
> It is ill-defined. E.g. exp(A), where A is a matrix. Is exp(A) a matrix of
> exponents or exponent matrix?

Solved:

-----------------------
octave:21> exp(A)
ans =

     2.7183    7.3891
    20.0855   54.5982

octave:22> expm(A)
ans =

     51.969    74.737
    112.105   164.074
-----------------------------

>To me it is the second.

It is simply the way it is defined.

Just like when I read a paper, I first look for the definitions
of symbols and terms used by the author. One does not have to
guess. Everything is defined. To do these in Ada, Ada will
define them as it wishes. As long as they are there, and
well defined, no one will complain. All of these issues have
been solved in many other languages.

>A similar example for
> vectors: abs V. Is it a vector of absolute values, or maybe ||V||?
>

Also defined.

-------------------------------
octave:23> A=rand(3,1)
A =
    0.45896
    0.61557
    0.26697

octave:24> abs(V)
ans =
    3   4   5

octave:25> norm(V)
ans =  7.0711
-------------------------

--Nasser






^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: Does Ada need elemental functions to make it suitable for scientific work?
  2012-07-10  9:21           ` Nasser M. Abbasi
@ 2012-07-10  9:26             ` Nasser M. Abbasi
  2012-07-10  9:50             ` Dmitry A. Kazakov
  2012-07-20  1:56             ` Randy Brukardt
  2 siblings, 0 replies; 81+ messages in thread
From: Nasser M. Abbasi @ 2012-07-10  9:26 UTC (permalink / raw)


On 7/10/2012 4:21 AM, Nasser M. Abbasi wrote:

> -------------------------------
> octave:23> A=rand(3,1)
> A =
>      0.45896
>      0.61557
>      0.26697
>
> octave:24> abs(V)
> ans =
>      3   4   5
>
> octave:25> norm(V)
> ans =  7.0711
> -------------------------

Meant to use A, not V above, of course:

----------------------
octave:26> abs(A)
ans =
    0.45896
    0.61557
    0.26697

octave:27> norm(A)
ans =  0.81293
----------------------

--Nasser




^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: Does Ada need elemental functions to make it suitable for scientific work?
  2012-07-10  9:21           ` Nasser M. Abbasi
  2012-07-10  9:26             ` Nasser M. Abbasi
@ 2012-07-10  9:50             ` Dmitry A. Kazakov
  2012-07-20  1:56             ` Randy Brukardt
  2 siblings, 0 replies; 81+ messages in thread
From: Dmitry A. Kazakov @ 2012-07-10  9:50 UTC (permalink / raw)


On Tue, 10 Jul 2012 04:21:17 -0500, Nasser M. Abbasi wrote:

> On 7/10/2012 4:07 AM, Dmitry A. Kazakov wrote:
>> On Tue, 10 Jul 2012 03:39:58 -0500, Nasser M. Abbasi wrote:
>>
>>> That is a start. But not enough by any means. All functions should be
>>> vectorized.
>>
>> It is ill-defined. E.g. exp(A), where A is a matrix. Is exp(A) a matrix of
>> exponents or exponent matrix?

[...]

> Just like when I read a paper, I first look for the definitions
> of symbols and terms used by the author.

Exactly. This is how you define it; it is not how LaTeX or TeX does.
Since there is no unambiguous semantics of f() on a matrix, the
language shall not introduce one.

If you want scalar f() to apply to a matrix in some certain way, just
define that in Ada. You can even do it in generic way:

1. Using generics:

generic
   with function Scalar_F (X : Float) return Float;
function Generic_Vector_F (X : Vector) return Vector;

function Generic_Vector_F (X : Vector) return Vector is
begin
   return Result : Vector (X'Range) do
      for I in X'Range loop
         Result (I) := Scalar_F (X (I));
      end loop;
   end return;
end Generic_Vector_F;
 
function Sin is new Generic_Vector_F (Sin);
function Cos is new Generic_Vector_F (Cos);
...
-------------------------------------------------------------
2. Using functional composition:

function "*"
   (  F : not null access function (X : Float) return Float;
      X : Vector
   )  return Vector is
begin
   return Result : Vector (X'Range) do
      for I in X'Range loop
         Result (I) := F (X (I));
      end loop;
   end return;
end "*";

Used as:

sin'Access * V  -- Applies sin to V

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: Does Ada need elemental functions to make it suitable for scientific work?
  2012-07-09 23:27 Does Ada need elemental functions to make it suitable for scientific work? Nasser M. Abbasi
       [not found] ` <d78nv7dhf88bqv7hrd9eft231a4h2scs10@invalid.netcom.com>
  2012-07-10  4:24 ` gautier_niouzes
@ 2012-07-10 11:06 ` Simon Wright
  2012-07-10 11:59 ` Georg Bauhaus
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 81+ messages in thread
From: Simon Wright @ 2012-07-10 11:06 UTC (permalink / raw)


"Nasser M. Abbasi" <nma@12000.org> writes:

> In Fortran, I can just write V**2, where V is a vector. In Ada, I
> can't do that. Since ** is not defined on this array type and this
> number type.
>
> I would have to write a loop to iterate over V and do V(I)**2 on each
> element.
>
> This for me, is a step backward. This is how Fortran77 was.
>
> I know I am a newbie in Ada, and I could have overlooked a simple
> solution to do this. But I really do not want to write packages and
> define operators each time, for each type I make, to have it work as
> an elemental function.
>
> This is something that should be built into the language.

The relevant AI is, I think, AI95-00296[1]. The !discussion says that

   "The component by component operations of * / and ** on vectors have
   been deleted [from ISO/IEC 13813] on the grounds that they are not
   useful. (They might be useful for manipulating arrays in general but
   we are concerned with arrays used as vectors for linear algebra.)

   "Operations for vector products were considered but not added. This
   is because, as usually defined, they only apply in three-dimensional
   space."

GNAT would allow you to implement them fairly easily using
System.Generic_Array_Operations (which is a GNAT internal package that
survived the transition to GNAT GPL 2012!).

You could make the case that they are in fact useful and should be
incorporated, but of course that won't happen until the 202x revision.

[1] http://www.ada-auth.org/cgi-bin/cvsweb.cgi/ais/ai-00296.txt?rev=1.23



^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: Does Ada need elemental functions to make it suitable for scientific work?
  2012-07-09 23:27 Does Ada need elemental functions to make it suitable for scientific work? Nasser M. Abbasi
                   ` (2 preceding siblings ...)
  2012-07-10 11:06 ` Simon Wright
@ 2012-07-10 11:59 ` Georg Bauhaus
  2012-07-10 12:20 ` Brian Drummond
  2012-07-10 19:52 ` Ada novice
  5 siblings, 0 replies; 81+ messages in thread
From: Georg Bauhaus @ 2012-07-10 11:59 UTC (permalink / raw)


On 10.07.12 01:27, Nasser M. Abbasi wrote:
> In Fortran, I can just write V**2, where V is a vector. In Ada,
> I can't do that. Since ** is not defined on this array type
> and this number type.

As Gautier pointed out, a general-purpose language must be more than
convenient for some. What the convenient operations might offer
to a small niche of science is not a good enough reason, even when one
personally considers them important.

You mentioned operations on vectors. Mine would include
operations like

- manipulate only values of a vector at certain indices,
  or produce a result from some components only,
  fold style, or something even more involved.

Such as: Make all negative values 0.0.

It is possible to do this using array operations and has been ever
since APL (late 1960s), and R, and others, by composing functions.
But APL programmers and others have explained that there is
only a slight chance that composed operations will be efficient
since many results will be computed needlessly, wasting real time
and real memory. If efficiency is a concern, one might have to
dismiss such designs of inefficient computations. This is a first
trade-off.

For the above computation, then, I might need some kind of loop,
be it expressed as a simple loop, or as a clever, if intricate,
tail recursive function. This is easily done using languages like
Ada, Fortran, or C++, languages that, incidentally, are used
to write advanced mathematicians' (or physicists') calculators
like Octave. The languages enable methods of controlling the
operational aspects of an algorithm that vector operators will
by definition take out of the hands of programmers.
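
For the clamping example above, such a loop is short (a sketch,
assuming the Real_Vector type of Ada.Numerics.Real_Arrays):

with Ada.Numerics.Real_Arrays; use Ada.Numerics.Real_Arrays;

--  Make all negative components 0.0, in place.
procedure Clamp_Negatives (V : in out Real_Vector) is
begin
   for I in V'Range loop
      if V (I) < 0.0 then
         V (I) := 0.0;
      end if;
   end loop;
end Clamp_Negatives;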

When writing Ada, I will expect to be able to look at the
constituent parts of a computation. For example, my program might
need to decide whether or not a computation can continue, at least
for parts of the vector, when one component has an unusable
intermediate result. For example,

   Sum (V);  --  V = Vector'(1.2, 3.4, broken, ..., 11.12);

If programmers should be able to attach their code between the
steps that operators perform (be they in parallel or in sequence),
then I imagine a library based "operator" could at least be made
very flexible. Still, compilers could select more optimal
alternatives where possible.



^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: Does Ada need elemental functions to make it suitable for scientific work?
  2012-07-09 23:27 Does Ada need elemental functions to make it suitable for scientific work? Nasser M. Abbasi
                   ` (3 preceding siblings ...)
  2012-07-10 11:59 ` Georg Bauhaus
@ 2012-07-10 12:20 ` Brian Drummond
  2012-07-10 19:52 ` Ada novice
  5 siblings, 0 replies; 81+ messages in thread
From: Brian Drummond @ 2012-07-10 12:20 UTC (permalink / raw)


On Mon, 09 Jul 2012 18:27:54 -0500, Nasser M. Abbasi wrote:

> The most important feature which I found missing is that Ada functions
> do not automatically work on arrays and matrices.

Everyone moving from a domain-specific language to a general one will have
a "most important missing feature".

> In Fortran, I can just write V**2, where V is a vector. In Ada,
> I can't do that. Since ** is not defined on this array type and this
> number type.
> 
> ... I really do not want to write packages and
> define operators each time, for each type I make, to have it work as
> an elemental function.
> 
> This is something that should be built into the language.

No, it is something that can be added to the language where necessary, 
with the minimum of wasted effort.

For example, write ONCE a generic ** function, which you can instantiate
with any (numeric!) array type.
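
A sketch of such a generic (the names are illustrative):

generic
   type Real is digits <>;
   type Index is range <>;
   type Vector is array (Index range <>) of Real;
function Elementwise_Power (V : Vector; N : Integer) return Vector;

function Elementwise_Power (V : Vector; N : Integer) return Vector is
begin
   return R : Vector (V'Range) do
      for I in V'Range loop
         R (I) := V (I) ** N;   -- the predefined Real ** Integer
      end loop;
   end return;
end Elementwise_Power;

--  Instantiated once per array type, e.g. given
--     type Vec is array (Integer range <>) of Float;
--  the element-wise operator is one line:
--     function "**" is new Elementwise_Power (Float, Integer, Vec);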

Now a package of such generic functions, extending
Numerics.Generic_Real_Arrays, might be a useful addition to Ada.

Of course it would have to be clear from the package documentation which of
the possible meanings of "**" was implemented. You might want separate
packages implementing element-wise operators or array operators.

Or you may want different ad-hoc operator symbols, like "^" or ".^" in
your Octave examples, to distinguish the different meanings. (But in your
examples, even the domain-specific languages can't agree to use the same
symbols!)

This would be one use case for my recent proposal to allow user-
defined operator symbols in a future Ada. Given a few restrictions, (e.g.
on operator precedence) it looked like a relatively harmless addition to 
Ada, and fairly general in application (not specific to one domain).

- Brian



^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: Does Ada need elemental functions to make it suitable for scientific work?
       [not found]     ` <637de084-0e71-4077-a1c5-fc4200cad3cf@googlegroups.com>
  2012-07-10  8:39       ` Nasser M. Abbasi
@ 2012-07-10 12:46       ` Brian Drummond
  1 sibling, 0 replies; 81+ messages in thread
From: Brian Drummond @ 2012-07-10 12:46 UTC (permalink / raw)


On Tue, 10 Jul 2012 01:02:13 -0700, gautier.de.montmollin wrote:

> On Tuesday, July 10, 2012 at 07:22:58 UTC+2, Ada novice wrote:
> 
>> Believe me, for someone who does numerical computations on a daily basis,
>> an operation like V**2 is a NECESSITY. It's a pity that Ada does
>> not offer such a facility. I do not blame those who stick to Matlab (or
>> Fortran?) for doing numerical computations.
> 
> Wait... There are *such* facilities and several built-in operations
> *like* that:
> http://www.ada-auth.org/standards/12rm/html/RM-G-3-1.html
> 
> Apparently it is not enough. But it is possible to make a proposal for
> the next standard (and short term for the next GNAT version).

Furthermore, they are not really "built-in", so there is no need to go 
back to the standards authority if you feel they are lacking. The source 
for Ada.Numerics.Generic_Real_Arrays is right there, and contains no magic.

Adding "**" to suit your own purposes should be no more complicated than
the implementation of "+" in Ada.Numerics.Generic_Real_Arrays.

So, how does that work?

Ada.Numerics.Generic_Real_Arrays is found (on my machine) at
/usr/lib/gcc/i586-suse-linux/4.6/adainclude/a-ngrear.ads (specification) 
and /usr/lib/gcc/i586-suse-linux/4.6/adainclude/a-ngrear.adb (body)

The specification is
   function "+"   (Right : Real_Vector)       return Real_Vector;
...
   pragma Inline_Always ("+");

The body - you might expect a loop, performing addition on each element.
Well, not quite...

----------------------------------------------------------------------
with System.Generic_Array_Operations; use System.Generic_Array_Operations;

...
   --  Instantiating the following subprograms directly would lead to
   --  name clashes, so use a local package.

   package Instantiations is
...
      function "+" is new
        Vector_Vector_Elementwise_Operation
          (Left_Scalar   => Real'Base,
           Right_Scalar  => Real'Base,
           Result_Scalar => Real'Base,
           Left_Vector   => Real_Vector,
           Right_Vector  => Real_Vector,
           Result_Vector => Real_Vector,
           Operation     => "+");
...
   end Instantiations;

   function "+" (Left, Right : Real_Vector) return Real_Vector
      renames Instantiations."+";

----------------------------------------------------------------------

So it uses a generic function from System.Generic_Array_Operations to 
loop over the vector, applying the Operation argument to each element.

And this generic function is available for you to use in the same manner.
(There are similar ones for unary operations like sin(), and others for 
2D matrix operations, etc)

So you can create a package along the same lines, and add **, sin, etc to 
it. Once you have the package in place, adding more matrix operations of 
your own is trivial.

What does this generic function look like?
Excerpt from System.Generic_Array_Operations - on this system, at
/usr/lib/gcc/i586-suse-linux/4.6/adainclude/s-gearop.ads(specification)
--------------------------------
   generic
      type Left_Scalar is private;
      type Right_Scalar is private;
      type Result_Scalar is private;
      type Left_Vector is array (Integer range <>) of Left_Scalar;
      type Right_Vector is array (Integer range <>) of Right_Scalar;
      type Result_Vector is array (Integer range <>) of Result_Scalar;
      with function Operation
             (Left  : Left_Scalar;
              Right : Right_Scalar) return Result_Scalar;
   function Vector_Vector_Elementwise_Operation
     (Left  : Left_Vector;
      Right : Right_Vector) return Result_Vector;
--------------------------------
which should be enough information to let you use it...
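
For instance, an element-wise "*" on Real_Vector is then a single
instantiation (a sketch; note that System.Generic_Array_Operations is a
GNAT-internal unit, so its spec may change between releases):

with System.Generic_Array_Operations;
with Ada.Numerics.Real_Arrays; use Ada.Numerics.Real_Arrays;

function Elementwise_Product is new
  System.Generic_Array_Operations.Vector_Vector_Elementwise_Operation
    (Left_Scalar   => Float'Base,
     Right_Scalar  => Float'Base,
     Result_Scalar => Float'Base,
     Left_Vector   => Real_Vector,
     Right_Vector  => Real_Vector,
     Result_Vector => Real_Vector,
     Operation     => "*");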

If you are curious about its implementation, look at
/usr/lib/gcc/i586-suse-linux/4.6/adainclude/s-gearop.adb (body)
--------------------------------
   -----------------------------------------
   -- Vector_Vector_Elementwise_Operation --
   -----------------------------------------

   function Vector_Vector_Elementwise_Operation
     (Left  : Left_Vector;
      Right : Right_Vector) return Result_Vector
   is
      R : Result_Vector (Left'Range);

   begin
      if Left'Length /= Right'Length then
         raise Constraint_Error with
            "vectors are of different length in elementwise operation";
      end if;

      for J in R'Range loop
         R (J) := Operation (Left (J), Right (J - R'First + Right'First));
      end loop;

      return R;
   end Vector_Vector_Elementwise_Operation;
--------------------------------
which is of course the expected loop, plus a sanity check.

- Brian



^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: Does Ada need elemental functions to make it suitable for scientific work?
  2012-07-10  4:22   ` Nasser M. Abbasi
@ 2012-07-10 14:26     ` Marco
  0 siblings, 0 replies; 81+ messages in thread
From: Marco @ 2012-07-10 14:26 UTC (permalink / raw)
  Cc: nma

On Monday, July 9, 2012 9:22:59 PM UTC-7, Nasser M. Abbasi wrote:

> 
> There is only one Fortran ISO standard, just like with Ada.
> 
> http://gcc.gnu.org/wiki/GFortranStandards

 That is not reality-based. When a new language standard is published, there will still be code bases around that conform to the older standard, and compilers need to be able to handle that. For example, the Intel Fortran compiler deals nicely with existing F77 code mixed with newer F90 and later.

Just because the Ada 2012 standard exists does not mean 100% of the Ada community will adopt it today or even tomorrow.




^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: Does Ada need elemental functions to make it suitable for scientific work?
  2012-07-09 23:27 Does Ada need elemental functions to make it suitable for scientific work? Nasser M. Abbasi
                   ` (4 preceding siblings ...)
  2012-07-10 12:20 ` Brian Drummond
@ 2012-07-10 19:52 ` Ada novice
  2012-07-11  8:41   ` gautier_niouzes
  5 siblings, 1 reply; 81+ messages in thread
From: Ada novice @ 2012-07-10 19:52 UTC (permalink / raw)
  Cc: nma

On Tuesday, July 10, 2012 12:27:54 AM UTC+1, Nasser M. Abbasi wrote:
> I have been trying Ada to see how suitable it is for computational
> work (I am studying finite elements, and wanted to try Ada for
> basic programs).


In finite element formulations, one will usually end up with sparse matrices. I do not know whether Ada has support for such matrices. Matlab for instance has dedicated solver(s) for the system Ax=b when A is a sparse matrix. I think that besides storage considerations, sparse matrix solvers are faster (I could be wrong though).

There was a discussion about sparse matrices in one of the posts on CLA here some two years back. I've forgotten which thread it was though.

YC 



^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: Does Ada need elemental functions to make it suitable for scientific work?
  2012-07-10 19:52 ` Ada novice
@ 2012-07-11  8:41   ` gautier_niouzes
  2012-07-11  9:42     ` Ken Thomas
  0 siblings, 1 reply; 81+ messages in thread
From: gautier_niouzes @ 2012-07-11  8:41 UTC (permalink / raw)
  Cc: nma

Ada novice :

> In finite element formulations, one will usually end up with sparse matrices. I do not know whether Ada has support for such matrices. Matlab for instance has dedicated solver(s) for the system Ax=b when A is a sparse matrix. I think that besides storage considerations, sparse matrix solvers are faster (I could be wrong though).

They are fast, but it all depends on the problem and the conditioning of the matrix. Sometimes you are also well served by band matrices and a good algorithm to rearrange the node numbering, so that nodes that are close in the mesh end up with close indices and the matrix has fewer non-zero diagonal bands around the main diagonal.
The choice of solvers for sparse matrices is limited, because it needs to be: e.g. an LU decomposition would require knowing in advance where the non-zeros are in order to store L and U as sparse matrices - and they are probably not sparse at all...

You find a sparse matrix package and appropriate solvers in Mathpaqs pack there:
  http://sf.net/projects/mathpaqs/
______________________________________________________________________________
Gautier's Ada programming -- http://gautiersblog.blogspot.com/search/label/Ada 
NB: follow the above link for a valid e-mail address



^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: Does Ada need elemental functions to make it suitable for scientific work?
  2012-07-11  8:41   ` gautier_niouzes
@ 2012-07-11  9:42     ` Ken Thomas
  0 siblings, 0 replies; 81+ messages in thread
From: Ken Thomas @ 2012-07-11  9:42 UTC (permalink / raw)


On Wednesday, July 11, 2012 9:41:50 AM UTC+1, (unknown) wrote:
> Ada novice :
> 
> > In finite element formulations, one will usually end up with sparse matrices. I do not know whether Ada has support for such matrices. Matlab for instance has dedicated solver(s) for the system Ax=b when A is a sparse matrix. I think that besides storage considerations, sparse matrix solvers are faster (I could be wrong though).
> 
> They are fast, but it all depends on the problem and the conditioning of the matrix. Sometimes you are also well served by band matrices and a good algorithm to rearrange the node numbering, so that nodes that are close in the mesh end up with close indices and the matrix has fewer non-zero diagonal bands around the main diagonal.
> The choice of solvers for sparse matrices is limited, because it needs to be: e.g. an LU decomposition would require knowing in advance where the non-zeros are in order to store L and U as sparse matrices - and they are probably not sparse at all...
> 
> You find a sparse matrix package and appropriate solvers in Mathpaqs pack there:
>   http://sf.net/projects/mathpaqs/
> ______________________________________________________________________________
> Gautier's Ada programming -- http://gautiersblog.blogspot.com/search/label/Ada 
> NB: follow the above link for a valid e-mail address

I have written finite element codes in Ada 2005, and sparse matrices are essential. Without them, only small toy problems are possible. There are several stages in the method:

Given a FEM mesh, assemble the mass and stiffness matrices. The matrix structure is unknown in advance, so I used an instantiation of Ada.Containers.Ordered_Maps, with the pair of indices as the Key_Type and a Real type as the Element_Type. The ordering function can give row or column ordering.
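
A sketch of that representation (names illustrative; the expression
function is Ada 2012 syntax, a normal function body does the same in
Ada 2005):

with Ada.Containers.Ordered_Maps;

package Sparse_Matrices is
   type Real is digits 15;

   type Index_Pair is record
      Row, Col : Positive;
   end record;

   --  Row-major ordering; compare Col first for column-major.
   function "<" (L, R : Index_Pair) return Boolean is
     (L.Row < R.Row or else (L.Row = R.Row and then L.Col < R.Col));

   --  Assembly accumulates entries with Insert / Replace_Element.
   package Entry_Maps is new Ada.Containers.Ordered_Maps
     (Key_Type => Index_Pair, Element_Type => Real);
end Sparse_Matrices;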

The next stage is the linear algebra; both linear equations and eigenvalues are important. The choices are direct solvers, for example UMFPACK (http://www.cise.ufl.edu/research/sparse/umfpack),
or iterative solvers. Writing direct solvers is very difficult (I have tried) and the best advice is to obtain or write bindings to some of the codes.

For iterative solvers, you will need good preconditioning. 

Ken




^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: Does Ada need elemental functions to make it suitable for scientific work?
  2012-07-10  8:39       ` Nasser M. Abbasi
  2012-07-10  8:58         ` Ada novice
  2012-07-10  9:07         ` Dmitry A. Kazakov
@ 2012-07-12  0:22         ` robin.vowels
  2012-07-20  1:51         ` Randy Brukardt
  3 siblings, 0 replies; 81+ messages in thread
From: robin.vowels @ 2012-07-12  0:22 UTC (permalink / raw)
  Cc: nma

On Tuesday, 10 July 2012 18:39:58 UTC+10, Nasser M. Abbasi  wrote:
> On 7/10/2012 3:02 AM, gautier.de...@gmail.com wrote:
> > On Tuesday, July 10, 2012 at 07:22:58 UTC+2, Ada novice wrote:
> >
> >> Believe me, for someone who does numerical computations on a daily basis, an operation
> >> like V**2 is a NECESSITY. It's a pity that Ada does not offer such a facility.
> >> I do not blame those who stick to Matlab (or Fortran?) for doing numerical computations.
> >
> > Wait... There are *such* facilities and several built-in operations *like* that:
> > http://www.ada-auth.org/standards/12rm/html/RM-G-3-1.html
> >
> > Apparently it is not enough. But it is possible to make a proposal for
> > the next standard (and short term for the next GNAT version).
> >
> 
> That is a start. But not enough by any means. All functions should be
> vectorized. Even user defined ones.
> 
> In Ada, I can't even take the sin of a vector

But you can in PL/I.

> ----------------------
> with Ada.Numerics.Elementary_Functions;
> use Ada.Numerics.Elementary_Functions;
> 
> procedure foo3 is
>    type array_t is array(1..5) OF float;
>    D : constant array_t :=(0.1,0.2,0.3,0.4,0.5);
>    V : array_t;
>    
>    begin
>      -- V := sin(D);  -- ERROR
> 
>      for I in D'Range loop  -- must use a LOOP
>          V(I) := sin(D(I));
>      end loop;
>         
> end foo3;
> --------------------
> 
> I can of course overload sin, hide the loop inside my own sine
> and then do
> 
>     V:= myPackage.sin(D);
> 
> All of this is possible in Ada. It is just a little (a lot?)
> more effort compared to what is out there.
> 
> octave/Matlab:
> --------------------------
> D=[0.1,0.2,0.3,0.4,0.5];
> sin(D)
> 
> ans =
>      0.0998    0.1987    0.2955    0.3894    0.4794
> ---------------------------

and PL/I.

> > The snag with that *specific* operation, v**2 meaning "v(i)**2 for each i",
> > is that it is counter-intuitive from a linear algebra point of view. You
> > would rather expect v**2 = v*v. Now another snag: should "*" be the
> > dot or the cross product?...
> >
> > These kinds of things are thought out carefully before landing in an Ada standard...
> >
> 
> But these issues were solved a long time ago (maybe
> 30 years ago?) and are well defined for all cases:

More like 45 years ago,
for PL/I has had that facility since 1966.

For vectors, matrices (in fact multi-dimensional arrays),
and for cross-sections of arrays.



^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: Does Ada need elemental functions to make it suitable for scientific work?
  2012-07-10  9:07         ` Dmitry A. Kazakov
  2012-07-10  9:21           ` Nasser M. Abbasi
@ 2012-07-12  0:31           ` robin.vowels
  2012-07-12  7:12             ` Dmitry A. Kazakov
  1 sibling, 1 reply; 81+ messages in thread
From: robin.vowels @ 2012-07-12  0:31 UTC (permalink / raw)
  Cc: mailbox

On Tuesday, 10 July 2012 19:07:05 UTC+10, Dmitry A. Kazakov  wrote:
> On Tue, 10 Jul 2012 03:39:58 -0500, Nasser M. Abbasi wrote:
> 
> &gt; That is a start. But not enough by any means. All functions should be
> &gt; vectorized.
> 
> It is ill-defined. E.g. exp(A), where A is a matrix. Is exp(A) a matrix of
> exponents or exponent matrix?

In languages that provide whole array operations (i.e.,
element-by-element operations -- such as PL/I and Fortran),
it is the former.

BTW., I think you mean matrix exponential, which is a far
less common operation than exp(A) or e**A(i) for i = 1 to n,
and would be written MATRIX_EXPONENTIAL or some such,
just as matrix multiplication would be written MATRIX_MULT
or some such, to distinguish it from the more common
element-by-element product.

> To me it is the second. A similar example for
> vectors: abs V. Is it a vector of absolute values, or maybe ||V||?

For element-by-element operations, it means the first,
as in PL/I and Fortran.



^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: Does Ada need elemental functions to make it suitable for scientific work?
  2012-07-12  0:31           ` robin.vowels
@ 2012-07-12  7:12             ` Dmitry A. Kazakov
  2012-07-29 13:39               ` Robin Vowels
  0 siblings, 1 reply; 81+ messages in thread
From: Dmitry A. Kazakov @ 2012-07-12  7:12 UTC (permalink / raw)


On Wed, 11 Jul 2012 17:31:33 -0700 (PDT), robin.vowels@gmail.com wrote:

>> It is ill-defined. E.g. exp(A), where A is a matrix. Is exp(A) a matrix of
>> exponents or exponent matrix?
> 
> In languages that provide whole array operations (i.e.,
> element-by-element operations -- such as PL/I and Fortran),
> it is the former.
> 
> BTW., I think you mean matrix exponential, which is a far
> less common operation than exp(A) or e**A(i) for i = 1 to n,

In linear algebra, provided matrices mean matrices, per-element operation
just does not make any sense. Exp(A), as well as power series are fairly
common in spectral analysis.

> and would be written MATRIX_EXPONENTIAL or some such,
> just as matrix multiplication would be written MATRIX_MULT
> or some such, to distinguish it from the more common
> element-by-element product.

My FORTRAN and PL/1 are quite rusty, but even these incredibly poor
languages did not define multiplication for matrices that way. In fact they
just had no matrices last time I used either.

I remember one library for sparse matrices in FORTRAN-IV, doing LU
decomposition and other stuff. It was quite fun. The library was in fact
very well-designed, but since FORTRAN-IV lacked even elementary data types,
they did all memory management required using INTEGER*4 as an index in one
huge REAL*4 array, serving as a memory pool.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: Does Ada need elemental functions to make it suitable for scientific work?
  2012-07-10  8:39       ` Nasser M. Abbasi
                           ` (2 preceding siblings ...)
  2012-07-12  0:22         ` robin.vowels
@ 2012-07-20  1:51         ` Randy Brukardt
  2012-07-29 13:53           ` Robin Vowels
  3 siblings, 1 reply; 81+ messages in thread
From: Randy Brukardt @ 2012-07-20  1:51 UTC (permalink / raw)



"Nasser M. Abbasi" <nma@12000.org> wrote in message 
news:jtgpos$20g$2@speranza.aioe.org...
> On 7/10/2012 3:02 AM, gautier.de.montmollin@gmail.com wrote:
>> On Tuesday, July 10, 2012 at 07:22:58 UTC+2, Ada novice wrote:
>>
>>> Believe me, for someone who does numerical computations on a daily basis,
>>> an operation like V**2 is a NECESSITY. It's a pity that Ada does not offer
>>> such a facility. I do not blame those who stick to Matlab (or Fortran?)
>>> for doing numerical computations.
>>
>> Wait... There are *such* facilities and several built-in operations 
>> *like* that:
>> http://www.ada-auth.org/standards/12rm/html/RM-G-3-1.html
>>
>> Apparently it is not enough. But it is possible to make a proposal for
>>  the next standard (and short term for the next GNAT version).
>>
>
> That is a start. But not enough by any means. All functions should be
> vectorized. Even user defined ones.

That would be a complete change in philosophy for Ada. The expectation is 
that everything that can reasonably be done in libraries will be done that 
way. Note that Ada didn't add advanced containers directly in the syntax.

It might be reasonable to propose to add additional operators to the Ada 
language to make this more possible. We did look at that for Ada 2005 and 
decided not to do it, mainly because most of the interesting operators are 
not unary or binary. (Think integration and differentiation.)

> In Ada, I can't even take the sin of a vector

Of course you can, with a trivial loop.

     for E of Vect loop
           E := Sin(E);
     end loop;

And this has the distinct advantage of making it clear to other readers what 
it is that you are doing.

> ----------------------
> with Ada.Numerics.Elementary_Functions;
> use Ada.Numerics.Elementary_Functions;
>
> procedure foo3 is
>   type array_t is array(1..5) OF float;
>   D : constant array_t :=(0.1,0.2,0.3,0.4,0.5);
>   V : array_t;
>   begin
>     -- V := sin(D);  -- ERROR
>
>     for I in D'Range loop  -- must use a LOOP
>         V(I) := sin(D(I));
>     end loop;
>        end foo3;
> --------------------
>
> I can ofcourse overload sin, hide the loop inside my own sine
> and then do
>
>    V:= myPackage.sin(D);
>
> All of this is possible in Ada.

Right. And this is what you are supposed to do.

Note that in any case the Sin you are calling is not specially known to the 
compiler, so there is no benefit to having the compiler vectorize it: it 
will end up writing the loop anyway. (And hardware implementations of Sin 
generally do not meet Ada's checking and accuracy requirements, so it's hard 
to use them even if the compiler could recognize them.)

> It is just a little (alot?) more effort compared to what is out there.

Writing functions is what you do in Ada. And Ada is always about 
readability, including for those who aren't familiar with your code.

Things like Matlab have a very different purpose than Ada and of course are 
going to be easier to write. Ease of writing is not now and never will be a 
goal for Ada!

                                            Randy.






^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: Does Ada need elemental functions to make it suitable for scientific work?
  2012-07-10  9:21           ` Nasser M. Abbasi
  2012-07-10  9:26             ` Nasser M. Abbasi
  2012-07-10  9:50             ` Dmitry A. Kazakov
@ 2012-07-20  1:56             ` Randy Brukardt
  2012-07-20 21:49               ` Adam Beneschan
  2 siblings, 1 reply; 81+ messages in thread
From: Randy Brukardt @ 2012-07-20  1:56 UTC (permalink / raw)



"Nasser M. Abbasi" <nma@12000.org> wrote in message 
news:jtgs6g$833$1@speranza.aioe.org...
...
> Just like when I read a paper, I first look for the definitions
> of symbols and terms used by the author. one does not have to
> guess. Everything is defined. To do these in Ada, Ada will
> define them as it wishes. As long as there are there, and
> well defined, no one will complain. All of these issue have
> been solved in many other languages.

We try very hard in Ada not to define things that will be confusing to 
casual readers. If you have to look up the definition of how something works 
in Ada, we've pretty much failed that.

For instance, there has been a long controversy about using unary "+" as a 
short-hand conversion operator. That was its original intent, but efforts 
to add such operators to packages like Ada.Strings.Unbounded have failed 
mainly because some people think that the usage
     Unb_String := +"string";
would be confusing to casual Ada users. (It's common in my code, but I have 
to define the "+" myself.)

So it's not at all clear that the sort of operators you are suggesting are a 
good idea.

                                                 Randy.







^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: Does Ada need elemental functions to make it suitable for scientific work?
  2012-07-20  1:56             ` Randy Brukardt
@ 2012-07-20 21:49               ` Adam Beneschan
  0 siblings, 0 replies; 81+ messages in thread
From: Adam Beneschan @ 2012-07-20 21:49 UTC (permalink / raw)


On Thursday, July 19, 2012 6:56:20 PM UTC-7, Randy Brukardt wrote:

> 
> For instance, there has been a long controversy about using unary "+" as a 
> short-hand conversion operator. That was its original intent

It was?  Interesting.  I thought it was just there for "completeness" and
because other languages did the same thing -- specifically, I think Pascal
and Algol have unary *non-overloadable* "+" operators that don't really
serve any purpose, except for mathematical types to put into their
alternating Taylor series computations or something:

   if N mod 2 = 0 then Term := +(X**2) / Factorial (N);
                  else Term := -(X**2) / Factorial (N);
   end if;

But my memory about those other languages could be wrong.  Anyway, if overloading was the reason unary "+" was kept in Ada 83, that's an interesting fact I wouldn't have guessed. 

                          -- Adam



^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: Does Ada need elemental functions to make it suitable for scientific work?
  2012-07-12  7:12             ` Dmitry A. Kazakov
@ 2012-07-29 13:39               ` Robin Vowels
  2012-07-29 14:22                 ` Dmitry A. Kazakov
  0 siblings, 1 reply; 81+ messages in thread
From: Robin Vowels @ 2012-07-29 13:39 UTC (permalink / raw)


On Jul 12, 5:12 pm, "Dmitry A. Kazakov" <mail...@dmitry-kazakov.de>
wrote:
> On Wed, 11 Jul 2012 17:31:33 -0700 (PDT), rrrrrrr@gmail.com wrote:
> >> It is ill-defined. E.g. exp(A), where A is a matrix. Is exp(A) a matrix of
> >> exponents or exponent matrix?
>
> > In languages that provide whole array operations (i.e.,
> > element-by-element operations -- such as PL/I and Fortran),
> > it is the former.
>
> > BTW., I think you mean matrix exponential, which is a far
> > less common operation than exp(A) or e**A(i) for i = 1 to n,
>
> In linear algebra, provided matrices mean matrices, per-element operation
> just does not make any sense.

Element-by-element operations are required routinely in numerical
work.  They have been demanded and have been available since
at least 1955.

> Exp(A), as well as power series are fairly
> common in spectral analysis.
>
> > and would be written MATRIX_EXPONENTIAL or some such,
> > just as matrix multiplication would be written MATRIX_MULT
> > or some such, to distinguish it from the more common
> > element-by-element product.
>
> My FORTRAN and PL/1 are quite rusty, but even these incredibly poor
> languages did not define multiplication for matrices that way.

PL/I defined multiplication for matrices as an element-by-element
product, as I said before.
Back then, FORTRAN did not offer such operations on matrices;
it now does, and has done so since Fortran 90.

> In fact they
> just had no matrices last time I used either.

Back then, both PL/I and FORTRAN offered matrices (and still do).
For that matter, both offered multi-dimensional arrays (and still do).

> I remember one library for sparse matrices in FORTRAN-IV, doing LU
> decomposition and other stuff. It was quite fun. The library was in fact
> very well-designed, but since FORTRAN-IV lacked even elementary data types,

FORTRAN has always had elementary data types.

> they did all memory management required using INTEGER*4 as an index in one
> huge REAL*4 array, serving as a memory pool.

That's because FORTRAN IV did not have dynamic arrays.
PL/I did, of course, from the first compilers in c. 1966.



^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: Does Ada need elemental functions to make it suitable for scientific work?
  2012-07-20  1:51         ` Randy Brukardt
@ 2012-07-29 13:53           ` Robin Vowels
  2012-07-29 15:51             ` J-P. Rosen
  0 siblings, 1 reply; 81+ messages in thread
From: Robin Vowels @ 2012-07-29 13:53 UTC (permalink / raw)


On Jul 20, 11:51 am, "Randy Brukardt" <ra...@rrsoftware.com> wrote:
> "Nasser M. Abbasi" <n...@12000.org> wrote in messagenews:jtgpos$20g$2@speranza.aioe.org...
>
> > On 7/10/2012 3:02 AM, gautier.de.montmol...@gmail.com wrote:
> >> On Tuesday, July 10, 2012 07:22:58 UTC+2, Ada novice wrote:
>
> >>> Believe me, for someone who does numerical computations on a daily basis,
> >>> an operation like V**2 is a NECESSITY. It's a pity that Ada does not
> >>> offer such a facility. I do not blame those who stick to Matlab (or
> >>> Fortran?) for doing numerical computations.
>
> >> Wait... There are *such* facilities and several built-in operations
> >> *like* that:
> >> http://www.ada-auth.org/standards/12rm/html/RM-G-3-1.html
>
> >> Apparently it is not enough. But it is possible to make a proposal for
> >>  the next standard (and short term for the next GNAT version).
>
> > That is a start. But not enough by any means. All functions should be
> > vectorized. Even user defined ones.
>
> That would be a complete change in philosophy for Ada. The expectation is
> that everything that can reasonably be done in libraries will be done that
> way. Note that Ada didn't add advanced containers directly in the syntax.
>
> It might be reasonable to propose to add additional operators to the Ada
> language to make this more possible. We did look at that for Ada 2005 and
> decided not to do it, mainly because most of the interesting operators are
> not unary or binary. (Think integration and differentiation.)
>
> > In Ada, I can't even take the sin of a vector
>
> Of course you can, with a trivial loop.
>
>      for E of Vect loop
>            E := Sin(E);
>      end loop;

Somewhat long-winded, wouldn't you say, when in Fortran and PL/I
you can write it as
E = sin(E);

> And this has the distinct advantage of making it clear to other readers what
> it is that you are doing.

No, it obfuscates what is happening.
It is as verbose as the old-fashioned way of writing a loop:
   do i = 1, n
      E(i) = sin(E(i))
   end do
and the equivalent in PL/I.

> Note that in any case the Sin you are calling is not specially known to the
> compiler,

but in PL/I and Fortran it is.



^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: Does Ada need elemental functions to make it suitable for scientific work?
  2012-07-29 13:39               ` Robin Vowels
@ 2012-07-29 14:22                 ` Dmitry A. Kazakov
  2012-07-29 20:54                   ` glen herrmannsfeldt
  2012-07-30  0:49                   ` Does Ada need elemental functions to make it suitable for scientific work? Robin Vowels
  0 siblings, 2 replies; 81+ messages in thread
From: Dmitry A. Kazakov @ 2012-07-29 14:22 UTC (permalink / raw)


On Sun, 29 Jul 2012 06:39:58 -0700 (PDT), Robin Vowels wrote:

> On Jul 12, 5:12 pm, "Dmitry A. Kazakov" <mail...@dmitry-kazakov.de>
> wrote:
>> On Wed, 11 Jul 2012 17:31:33 -0700 (PDT), rrrrrrr@gmail.com wrote:
>>>> It is ill-defined. E.g. exp(A), where A is a matrix. Is exp(A) a matrix of
>>>> exponents or exponent matrix?
>>
>>> In languages that provide whole array operations (i.e.,
>>> element-by-element operations -- such as PL/I and Fortran),
>>> it is the former.
>>
>>> BTW., I think you mean matrix exponential, which is a far
>>> less common operation than exp(A) or e**A(i) for i = 1 to n,
>>
>> In linear algebra, provided matrices mean matrices, per-element operation
>> just does not make any sense.
> 
> Element-by-element operations are required routinely in numerical
> work.

I wrote specifically about matrices as known in linear algebra.

> PL/I defined multiplication for matrices as an element-by-element
> product, as I said before.

Sorry for PL/1!

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: Does Ada need elemental functions to make it suitable for scientific work?
  2012-07-29 13:53           ` Robin Vowels
@ 2012-07-29 15:51             ` J-P. Rosen
  2012-07-29 16:07               ` Dmitry A. Kazakov
                                 ` (2 more replies)
  0 siblings, 3 replies; 81+ messages in thread
From: J-P. Rosen @ 2012-07-29 15:51 UTC (permalink / raw)


On 29/07/2012 15:53, Robin Vowels wrote:
>> Note that in any case the Sin you are calling is not specially known to the
>> compiler,
> but in PL/I and Fortran it is.
Which implies that you can work only with predefined types.

One of the great benefits of Ada (for floating point types) is the
ability to choose accuracy and range, and let the compiler find the
appropriate underlying hardware type. Most machines have more than float
and double!
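
A minimal sketch of what that looks like (the digits and range here are
invented for the example):

     procedure Precision_Demo is
        --  Ask for at least 12 decimal digits over this range; the
        --  compiler maps the type onto suitable hardware (on a typical
        --  compiler a 64-bit float; asking for digits 18 would select
        --  an extended type where one exists).
        type Real is digits 12 range -1.0E30 .. 1.0E30;
        X : constant Real := 1.0 / 3.0;
     begin
        null;
     end Precision_Demo;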

-- 
J-P. Rosen
Adalog
2 rue du Docteur Lombard, 92441 Issy-les-Moulineaux CEDEX
Tel: +33 1 45 29 21 52, Fax: +33 1 45 29 25 00
http://www.adalog.fr



^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: Does Ada need elemental functions to make it suitable for scientific work?
  2012-07-29 15:51             ` J-P. Rosen
@ 2012-07-29 16:07               ` Dmitry A. Kazakov
  2012-07-29 20:30                 ` Simon Wright
  2012-07-30  0:53               ` Robin Vowels
  2012-07-30  2:20               ` Shmuel Metz
  2 siblings, 1 reply; 81+ messages in thread
From: Dmitry A. Kazakov @ 2012-07-29 16:07 UTC (permalink / raw)


On Sun, 29 Jul 2012 17:51:25 +0200, J-P. Rosen wrote:

> One of the great benefits of Ada (for floating point types) is the
> ability to chose accurracy and range, and let the compiler find the
> appropriate underlying hardware type. Most machines have more than float
> and double!

An advantage that gets lost when external numeric libraries are used.

Let us hope that growing interest in scientific computation in Ada will
result in native Ada numeric libraries appearing.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: Does Ada need elemental functions to make it suitable for scientific work?
  2012-07-29 16:07               ` Dmitry A. Kazakov
@ 2012-07-29 20:30                 ` Simon Wright
  2012-07-29 20:59                   ` glen herrmannsfeldt
  2012-07-29 21:44                   ` J-P. Rosen
  0 siblings, 2 replies; 81+ messages in thread
From: Simon Wright @ 2012-07-29 20:30 UTC (permalink / raw)


"Dmitry A. Kazakov" <mailbox@dmitry-kazakov.de> writes:

> On Sun, 29 Jul 2012 17:51:25 +0200, J-P. Rosen wrote:
>
>> One of the great benefits of Ada (for floating point types) is the
>> ability to chose accurracy and range, and let the compiler find the
>> appropriate underlying hardware type. Most machines have more than float
>> and double!

ix86's have 80-bit floats (GNAT's Long_Long_Float); what other
commonly-available extended hardware precisions are there?

> Which advantage gets lost when external numeric libraries are used.

I believe that BLAS & possibly LAPACK can be built for extended
precision. But (usually) aren't.



^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: Does Ada need elemental functions to make it suitable for scientific work?
  2012-07-29 14:22                 ` Dmitry A. Kazakov
@ 2012-07-29 20:54                   ` glen herrmannsfeldt
       [not found]                     ` <apib1897s56dkultqmfl3emvk1os3tfdak@invalid.netcom.com>
  2012-07-30  0:49                   ` Does Ada need elemental functions to make it suitable for scientific work? Robin Vowels
  1 sibling, 1 reply; 81+ messages in thread
From: glen herrmannsfeldt @ 2012-07-29 20:54 UTC (permalink / raw)


In comp.lang.pl1 Dmitry A. Kazakov <mailbox@dmitry-kazakov.de> wrote:

(snip, someone wrote)
>>>>> It is ill-defined. E.g. exp(A), where A is a matrix. 
>>>>> Is exp(A) a matrix of exponents or exponent matrix?

I can see general-purpose (more or less) high-level languages
offering a matrix multiply operator, but this seems a little
less common.

The only language I know of that offers matrix multiply and
not element-by-element multiply is BASIC.

>>>> In languages that provide whole array operations (i.e.,
>>>> element-by-element operations -- such as PL/I and Fortran),
>>>> it is the former.

Fortran now has the MATMUL intrinsic to do matrix multiplication.
The * operator does element by element multiply.
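
(Ada's counterpart, for comparison, is the predefined "*" of Annex G's
Generic_Real_Arrays, which is the true matrix product; a minimal sketch
using the preinstantiated Ada.Numerics.Real_Arrays:

     with Ada.Numerics.Real_Arrays; use Ada.Numerics.Real_Arrays;

     procedure Matmul_Demo is
        A : constant Real_Matrix (1 .. 2, 1 .. 2) := ((1.0, 2.0),
                                                      (3.0, 4.0));
        B : constant Real_Matrix (1 .. 2, 1 .. 2) := ((5.0, 6.0),
                                                      (7.0, 8.0));
        C : constant Real_Matrix := A * B;  -- matrix product, not
                                            -- element-by-element
     begin
        null;
     end Matmul_Demo;

There is no predefined element-by-element "*" there; that one you
write yourself.)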

One could ask for a MATEXP intrinsic, though it isn't likely
to be added soon.

-- glen



^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: Does Ada need elemental functions to make it suitable for scientific work?
  2012-07-29 20:30                 ` Simon Wright
@ 2012-07-29 20:59                   ` glen herrmannsfeldt
  2012-07-29 21:44                   ` J-P. Rosen
  1 sibling, 0 replies; 81+ messages in thread
From: glen herrmannsfeldt @ 2012-07-29 20:59 UTC (permalink / raw)


In comp.lang.pl1 Simon Wright <simon@pushface.org> wrote:

(snip)
> ix86's have 80-bit floats (GNAT's Long_Long_Float); what other
> commonly-available extended hardware precisions are there?

IBM S/370 and successors (now z/Architecture), and VAX have a 128-bit
floating point type. The hardware instructions might be
done by software emulation for VAX. For S/370, DXR (divide) was
done in software, but was added to the architecture sometime in
ESA/390.

>> Which advantage gets lost when external numeric libraries are used.

> I believe that BLAS & possibly LAPACK can be built for extended
> precision. But (usually) aren't.

Many routines have to be modified for higher precision, such as
using more coefficients for polynomial expansions. It can be done,
but often isn't.

-- glen



^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: Does Ada need elemental functions to make it suitable for scientific work?
  2012-07-29 20:30                 ` Simon Wright
  2012-07-29 20:59                   ` glen herrmannsfeldt
@ 2012-07-29 21:44                   ` J-P. Rosen
  2012-07-29 22:54                     ` Simon Wright
  1 sibling, 1 reply; 81+ messages in thread
From: J-P. Rosen @ 2012-07-29 21:44 UTC (permalink / raw)


On 29/07/2012 22:30, Simon Wright wrote:
> ix86's have 80-bit floats (GNAT's Long_Long_Float); what other
> commonly-available extended hardware precisions are there?
Lots of them. See for example http://mrob.com/pub/math/floatformats.html


-- 
J-P. Rosen
Adalog
2 rue du Docteur Lombard, 92441 Issy-les-Moulineaux CEDEX
Tel: +33 1 45 29 21 52, Fax: +33 1 45 29 25 00
http://www.adalog.fr



^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: Does Ada need elemental functions to make it suitable for scientific work?
  2012-07-29 21:44                   ` J-P. Rosen
@ 2012-07-29 22:54                     ` Simon Wright
  0 siblings, 0 replies; 81+ messages in thread
From: Simon Wright @ 2012-07-29 22:54 UTC (permalink / raw)


"J-P. Rosen" <rosen@adalog.fr> writes:

> On 29/07/2012 22:30, Simon Wright wrote:
>> ix86's have 80-bit floats (GNAT's Long_Long_Float); what other
>> commonly-available extended hardware precisions are there?
> Lots of them. See for example http://mrob.com/pub/math/floatformats.html

Yes, but - *commonly-available*. And I think we can omit the ones used
in calculators.



^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: Does Ada need elemental functions to make it suitable for scientific work?
  2012-07-29 14:22                 ` Dmitry A. Kazakov
  2012-07-29 20:54                   ` glen herrmannsfeldt
@ 2012-07-30  0:49                   ` Robin Vowels
  1 sibling, 0 replies; 81+ messages in thread
From: Robin Vowels @ 2012-07-30  0:49 UTC (permalink / raw)


On Jul 30, 12:22 am, "Dmitry A. Kazakov" <mail...@dmitry-kazakov.de>
wrote:
> On Sun, 29 Jul 2012 06:39:58 -0700 (PDT), Robin Vowels wrote:
> > On Jul 12, 5:12 pm, "Dmitry A. Kazakov" <mail...@dmitry-kazakov.de>
> > wrote:
> >> On Wed, 11 Jul 2012 17:31:33 -0700 (PDT), rrrr...@gmail.com wrote:
> >>>> It is ill-defined. E.g. exp(A), where A is a matrix. Is exp(A) a matrix of
> >>>> exponents or exponent matrix?
>
> >>> In languages that provide whole array operations (i.e.,
> >>> element-by-element operations -- such as PL/I and Fortran),
> >>> it is the former.
>
> >>> BTW., I think you mean matrix exponential, which is a far
> >>> less common operation than exp(A) or e**A(i) for i = 1 to n,
>
> >> In linear algebra, provided matrices mean matrices, per-element operation
> >> just does not make any sense.
>
> > Element-by-element operations are required routinely in numerical
> > work.
>
> I wrote specifically about matrices as known in linear algebra.
>
> > PL/I defined multiplication for matrices as an element-by-element
> > product, as I said before.
>
> Sorry for PL/1!

Nothing to be sorry about.
Trivial to use from the PL/I library or to write.

Term-by-term multiplication is the form more often used
(along with term-by-term add, subtract, and divide).
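
In Ada, such a term-by-term operator is a short user-defined function;
a minimal sketch (the package and type names are invented, and the
operands are assumed to have the same bounds):

     package Elementwise is
        type Vector is array (Positive range <>) of Float;
        function "*" (L, R : Vector) return Vector;
     end Elementwise;

     package body Elementwise is
        function "*" (L, R : Vector) return Vector is
           Result : Vector (L'Range);
        begin
           for I in L'Range loop
              Result (I) := L (I) * R (I);   -- term-by-term product;
                                             -- mismatched bounds raise
                                             -- Constraint_Error
           end loop;
           return Result;
        end "*";
     end Elementwise;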



^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: Does Ada need elemental functions to make it suitable for scientific work?
  2012-07-29 15:51             ` J-P. Rosen
  2012-07-29 16:07               ` Dmitry A. Kazakov
@ 2012-07-30  0:53               ` Robin Vowels
  2012-07-30  2:20               ` Shmuel Metz
  2 siblings, 0 replies; 81+ messages in thread
From: Robin Vowels @ 2012-07-30  0:53 UTC (permalink / raw)


On Jul 30, 1:51 am, "J-P. Rosen" <ro...@adalog.fr> wrote:
> On 29/07/2012 15:53, Robin Vowels wrote:
> >> Note that in any case the Sin you are calling is not specially known to the
> >> compiler,
> > but in PL/I and Fortran it is.
>
> Which implies that you can work only with predefined types.
>
> One of the great benefits of Ada (for floating point types) is the
> ability to choose accuracy and range,

Which we can do in PL/I and Fortran.

> and let the compiler find the
> appropriate underlying hardware type.

Which is what happens with PL/I.

> Most machines have more than float and double!



^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: Does Ada need elemental functions to make it suitable for scientific work?
  2012-07-29 15:51             ` J-P. Rosen
  2012-07-29 16:07               ` Dmitry A. Kazakov
  2012-07-30  0:53               ` Robin Vowels
@ 2012-07-30  2:20               ` Shmuel Metz
  2 siblings, 0 replies; 81+ messages in thread
From: Shmuel Metz @ 2012-07-30  2:20 UTC (permalink / raw)


In <jv3m5r$k2k$1@dont-email.me>, on 07/29/2012
   at 05:51 PM, "J-P. Rosen" <rosen@adalog.fr> said:

>Which implies that you can work only with predefined types.

Nonsense.

>One of the great benefits of Ada (for floating point types)

Over C and FORTRAN; not over PL/I.

>is the ability to chose accurracy and range,

I don't know of any Ada compiler that can extend the FP range of the
underlying hardware. As for accuracy, PL/I was there before the
original DOD/1 project began.

>and let the compiler find the appropriate underlying hardware 
>type.

A concept picked up from PL/I.

-- 
Shmuel (Seymour J.) Metz, SysProg and JOAT  <http://patriot.net/~shmuel>

Unsolicited bulk E-mail subject to legal action.  I reserve the
right to publicly post or ridicule any abusive E-mail.  Reply to
domain Patriot dot net user shmuel+news to contact me.  Do not
reply to spamtrap@library.lspace.org




^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: Does Ada need elemental functions to make it suitable for scientific work?
       [not found]                     ` <apib1897s56dkultqmfl3emvk1os3tfdak@invalid.netcom.com>
@ 2012-07-30  4:15                       ` glen herrmannsfeldt
       [not found]                         ` <nfhd181tv9u87mcqfb7rgd8lm48ihr9f4r@invalid.netcom.com>
  0 siblings, 1 reply; 81+ messages in thread
From: glen herrmannsfeldt @ 2012-07-30  4:15 UTC (permalink / raw)


In comp.lang.pl1 Dennis Lee Bieber <wlfraed@ix.netcom.com> wrote:

(snip, I wrote)

>> The only language I know of that offers matrix multiply and
>> not element-by-element multiply is BASIC.

I used to use it on the HP TSB2000 systems around 1975.

I believe it traced back to the original Dartmouth versions.

I believe Stanford BASIC also had them, but maybe not in
the Microsoft descendants.

-- glen



^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: MATRIX MULTIPLICATION
       [not found]                         ` <nfhd181tv9u87mcqfb7rgd8lm48ihr9f4r@invalid.netcom.com>
@ 2012-07-31  8:53                           ` Robin Vowels
  2012-07-31  9:05                             ` Robin Vowels
  0 siblings, 1 reply; 81+ messages in thread
From: Robin Vowels @ 2012-07-31  8:53 UTC (permalink / raw)


On Jul 31, 3:49 am, Dennis Lee Bieber <wlfr...@ix.netcom.com> wrote:

>        http://rosettacode.org/wiki/Matrix_multiplication
>
> seems a mixed bag. BBC BASIC,

The BBC BASIC "version" is a mess.
The declarations are for a 3 × 1 multiplied by a 1 × 3 to give a
3 × 2.
The data appears to be 4 × 2 and a 2 × 3,
while the product displayed is that of a 4 × 3.



^ permalink raw reply	[flat|nested] 81+ messages in thread

* Re: MATRIX MULTIPLICATION
  2012-07-31  8:53                           ` MATRIX MULTIPLICATION Robin Vowels
@ 2012-07-31  9:05                             ` Robin Vowels
  0 siblings, 0 replies; 81+ messages in thread
From: Robin Vowels @ 2012-07-31  9:05 UTC (permalink / raw)


Ignore what I just wrote.
I overlooked that the OP was using default lower bounds,
not lower bound of 1.



^ permalink raw reply	[flat|nested] 81+ messages in thread

end of thread, other threads:[~2012-08-07  7:35 UTC | newest]

Thread overview: 81+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2012-07-09 23:27 Does Ada need elemental functions to make it suitable for scientific work? Nasser M. Abbasi
     [not found] ` <d78nv7dhf88bqv7hrd9eft231a4h2scs10@invalid.netcom.com>
2012-07-10  4:22   ` Nasser M. Abbasi
2012-07-10 14:26     ` Marco
2012-07-10  4:24 ` gautier_niouzes
2012-07-10  5:22   ` Ada novice
2012-07-10  7:27     ` Dmitry A. Kazakov
2012-07-10  8:06     ` gautier_niouzes
     [not found]     ` <637de084-0e71-4077-a1c5-fc4200cad3cf@googlegroups.com>
2012-07-10  8:39       ` Nasser M. Abbasi
2012-07-10  8:58         ` Ada novice
2012-07-10  9:07         ` Dmitry A. Kazakov
2012-07-10  9:21           ` Nasser M. Abbasi
2012-07-10  9:26             ` Nasser M. Abbasi
2012-07-10  9:50             ` Dmitry A. Kazakov
2012-07-20  1:56             ` Randy Brukardt
2012-07-20 21:49               ` Adam Beneschan
2012-07-12  0:31           ` robin.vowels
2012-07-12  7:12             ` Dmitry A. Kazakov
2012-07-29 13:39               ` Robin Vowels
2012-07-29 14:22                 ` Dmitry A. Kazakov
2012-07-29 20:54                   ` glen herrmannsfeldt
     [not found]                     ` <apib1897s56dkultqmfl3emvk1os3tfdak@invalid.netcom.com>
2012-07-30  4:15                       ` glen herrmannsfeldt
     [not found]                         ` <nfhd181tv9u87mcqfb7rgd8lm48ihr9f4r@invalid.netcom.com>
2012-07-31  8:53                           ` MATRIX MULTIPLICATION Robin Vowels
2012-07-31  9:05                             ` Robin Vowels
2012-07-30  0:49                   ` Does Ada need elemental functions to make it suitable for scientific work? Robin Vowels
2012-07-12  0:22         ` robin.vowels
2012-07-20  1:51         ` Randy Brukardt
2012-07-29 13:53           ` Robin Vowels
2012-07-29 15:51             ` J-P. Rosen
2012-07-29 16:07               ` Dmitry A. Kazakov
2012-07-29 20:30                 ` Simon Wright
2012-07-29 20:59                   ` glen herrmannsfeldt
2012-07-29 21:44                   ` J-P. Rosen
2012-07-29 22:54                     ` Simon Wright
2012-07-30  0:53               ` Robin Vowels
2012-07-30  2:20               ` Shmuel Metz
2012-07-10 12:46       ` Brian Drummond
2012-07-10 11:06 ` Simon Wright
2012-07-10 11:59 ` Georg Bauhaus
2012-07-10 12:20 ` Brian Drummond
2012-07-10 19:52 ` Ada novice
2012-07-11  8:41   ` gautier_niouzes
2012-07-11  9:42     ` Ken Thomas
  -- strict thread matches above, loose matches on Subject: below --
1999-12-16  0:00 Matrix Multiplication Carlisle, Martin
1999-12-16  0:00 ` Howard W. LUDWIG
1999-12-15  0:00 Carlisle, Martin
1999-12-15  0:00 ` Mario Klebsch
1999-12-19  0:00   ` Robert Dewar
1999-12-19  0:00 ` Robert Dewar
1999-12-19  0:00 ` Robert Dewar
1999-12-14  0:00 William Dale
1999-12-14  0:00 ` Gautier
     [not found]   ` <5l1f5s4kck891a2s6o8bhvkirm4q79hm6c@4ax.com>
1999-12-15  0:00     ` Gautier
1999-12-15  0:00       ` Marin D. Condic
1999-12-15  0:00         ` Gautier
1999-12-16  0:00           ` Marin D. Condic
1999-12-27  0:00             ` Jeffrey L Straszheim
1999-12-15  0:00         ` Ted Dennison
1999-12-15  0:00           ` Gautier
1999-12-15  0:00             ` Tucker Taft
1999-12-16  0:00             ` Ted Dennison
1999-12-16  0:00             ` Ted Dennison
1999-12-15  0:00   ` Gautier
1999-12-14  0:00 ` David C. Hoos, Sr.
1999-12-15  0:00 ` Greg Lindahl
1999-12-15  0:00   ` Preben Randhol
1999-12-15  0:00 ` Ted Dennison
1999-12-15  0:00   ` William B. Clodius
1999-12-15  0:00     ` Ted Dennison
1999-12-15  0:00       ` William B. Clodius
1999-12-16  0:00         ` Robert A Duff
1999-12-16  0:00           ` William B. Clodius
1999-12-15  0:00 ` E. Robert Tisdale
     [not found]   ` <3856FD3F.8291A71C@ucar.edu>
1999-12-15  0:00     ` E. Robert Tisdale
1999-12-14  0:00       ` Richard Maine
     [not found] ` <01bf4708$99ef98f0$022a6282@dieppe>
1999-12-15  0:00   ` Gautier
1999-12-15  0:00   ` Robert A Duff
1999-12-15  0:00     ` Marin D. Condic
1999-12-16  0:00     ` Pascal Obry
1999-12-16  0:00       ` Brian Rogoff
1999-12-16  0:00       ` Greg Martin
1999-12-16  0:00     ` Dieter Britz

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox