comp.lang.ada
* In place matrix multiplication?
@ 2003-06-01  0:35 chris.danx
  2003-06-01  9:18 ` Dmitry A. Kazakov
  0 siblings, 1 reply; 4+ messages in thread
From: chris.danx @ 2003-06-01  0:35 UTC (permalink / raw)


Hi,

Does anyone know how to do this?  At first I thought it'd be something 
along the lines of...

    procedure Multiply (A : in out Matrix;
                        B : in     Matrix) is
       T : Scalar;
       Z : Scalar := Create(0);
    begin
       for rCount in Matrix_Size'range loop
          for cCount in Matrix_Size'range loop
             T := Z;
             for iCount in Matrix_Size'range loop
                T := T + A.Mx(rCount, iCount) * B.Mx(iCount, cCount);
             end loop;
             A.Mx(rCount, cCount) := T;
          end loop;
       end loop;
    end Multiply;

But this is obviously wrong.  Is there an algorithm to do this?  I 
googled for it, but haven't found anything yet.  I gather it's possible 
because some libraries do it.  They just don't say how they do so.
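
The obvious fallback, I suppose, is to compute the whole product into a
temporary and copy it back -- the naive in-place loop fails because it
overwrites a row of A while the inner sums still need the old values.
A rough sketch of that, assuming the same Matrix record with an Mx
component indexed by Matrix_Size and the Scalar/Create names from above,
would be:

    procedure Multiply (A : in out Matrix;
                        B : in     Matrix) is
       R : Matrix;                  -- holds the full result
       T : Scalar;
    begin
       for rCount in Matrix_Size'range loop
          for cCount in Matrix_Size'range loop
             T := Create(0);
             for iCount in Matrix_Size'range loop
                T := T + A.Mx(rCount, iCount) * B.Mx(iCount, cCount);
             end loop;
             R.Mx(rCount, cCount) := T;
          end loop;
       end loop;
       A := R;                      -- copy the result back into A
    end Multiply;

But that needs a whole extra matrix, which is what I'd like to avoid.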



Cheers,
Chris





* Re: In place matrix multiplication?
  2003-06-01  0:35 In place matrix multiplication? chris.danx
@ 2003-06-01  9:18 ` Dmitry A. Kazakov
  2003-06-01 12:54   ` chris.danx
  0 siblings, 1 reply; 4+ messages in thread
From: Dmitry A. Kazakov @ 2003-06-01  9:18 UTC (permalink / raw)


chris.danx wrote:

> Hi,
> 
> Does anyone know how to do this?  At first I thought it'd be something
> along the lines of...
> 
>     procedure Multiply (A : in out Matrix;
>                         B : in     Matrix) is
>        T : Scalar;
>        Z : Scalar := Create(0);
>     begin
>        for rCount in Matrix_Size'range loop
>           for cCount in Matrix_Size'range loop
>              T := Z;
>              for iCount in Matrix_Size'range loop
>                 T := T + A.Mx(rCount, iCount) * B.Mx(iCount, cCount);
>              end loop;
>                 A.Mx(rCount, cCount) := T;
>           end loop;
>        end loop;
>     end Multiply;
> 
> But this is obviously wrong.  Is there an algorithm to do this?  I
> googled for it, but haven't found anything yet.  I gather it's possible
> because some libraries do it.  They just don't say how they do so.

Something like this, I presume:

generic
   -- Some field of matrix elements:
   type Element is private;
   Zero : Element; -- Additive identity (zero) of Element's field
   with function "+" (L, R : Element) return Element;
   with function "*" (L, R : Element) return Element;
   -- Some index type:
   type Index is (<>);
   -- Two-dimensional array over the field:
   type Matrix is array (Index range <>, Index range <>) of Element;
procedure Mul (A : in out Matrix; B : Matrix);

procedure Mul (A : in out Matrix; B : Matrix) is
begin
   if (  A'First (2) /= B'First (1)  -- A*B does not exist
      or else
         A'Last (2)  /= B'Last (1)
      or else
         B'First (1) /= B'First (2)  -- Can't store the result back in A
      or else
         B'Last  (1) /= B'Last (2)
      )
   then
      raise Constraint_Error;
   end if;
   declare
      Temp : array (A'Range (2)) of Element;
      Item : Element;
   begin
      for Row in A'Range (1) loop
         -- Copy the row of A into Temp
         for Column in A'Range (2) loop
            Temp (Column) := A (Row, Column);
         end loop;
         -- Evaluate A*B (Row, *)
         for Column in A'Range (2) loop
            -- Evaluate A*B (Row, Column) in Item
            Item := Zero;
            for I in Temp'Range loop
               Item := Item + Temp (I) * B (I, Column);
            end loop;
            -- Store
            A (Row, Column) := Item;
         end loop;      
      end loop;
   end;
end Mul;

If you do not want to expose Zero, you can replace

-- Evaluate A*B (Row, Column) in Item
...

with a bit more cryptic:

Item := Temp (Temp'First) * B (Temp'First, Column);
if Temp'First /= Temp'Last then -- To avoid exception from 'Succ
   for I in Index'Succ (Temp'First)..Temp'Last loop
      Item := Item + Temp (I) * B (I, Column);
   end loop;
end if;
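
For what it's worth, an instantiation for plain Float matrices could look
like this (the names Real_Matrix and Mul_Float are of course arbitrary):

type Real_Matrix is
   array (Positive range <>, Positive range <>) of Float;

procedure Mul_Float is
   new Mul (Element => Float,
            Zero    => 0.0,
            "+"     => "+",
            "*"     => "*",
            Index   => Positive,
            Matrix  => Real_Matrix);

After that, Mul_Float (A, B) replaces A by A*B, provided B is square and
its bounds match A's second dimension.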

--
Regards,
Dmitry A. Kazakov
www.dmitry-kazakov.de




* Re: In place matrix multiplication?
  2003-06-01  9:18 ` Dmitry A. Kazakov
@ 2003-06-01 12:54   ` chris.danx
  2003-06-01 14:26     ` Bill Findlay
  0 siblings, 1 reply; 4+ messages in thread
From: chris.danx @ 2003-06-01 12:54 UTC (permalink / raw)


Dmitry A. Kazakov wrote:
> Something like this, I presume:
> 

Thanks Dmitry.

I've simplified it by just using a floating point type rather than an
abstraction (the abstraction only added complexity for little benefit
here).  Now I'm wondering whether the more complex in-place code ends up
costing more than an out-of-place multiplication followed by a copy back
to the original array.  I guess that'd depend on the code and the
compiler.  How do I find out which is faster for a given compiler
(GNAT)?
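
The only way I can think of to compare them is to time a batch of calls
to each version with Ada.Calendar, roughly along these lines (where
Multiply_In_Place and Multiply_Copy_Back stand for the two versions, and
A and B get filled with test data before the timing starts):

   with Ada.Calendar; use Ada.Calendar;
   with Ada.Text_IO;  use Ada.Text_IO;

   procedure Compare is
      Trials : constant := 1_000;
      A, B   : Matrix;    -- test matrices, filled in before timing
      Start  : Time;
   begin
      Start := Clock;
      for Trial in 1 .. Trials loop
         Multiply_In_Place (A, B);
      end loop;
      Put_Line ("In place : " & Duration'Image (Clock - Start));

      Start := Clock;
      for Trial in 1 .. Trials loop
         Multiply_Copy_Back (A, B);
      end loop;
      Put_Line ("Copy back: " & Duration'Image (Clock - Start));
   end Compare;

(With optimisation turned on, and with matrices big enough that the
times aren't just noise, presumably.)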


Thanks Again,
Chris





* Re: In place matrix multiplication?
  2003-06-01 12:54   ` chris.danx
@ 2003-06-01 14:26     ` Bill Findlay
  0 siblings, 0 replies; 4+ messages in thread
From: Bill Findlay @ 2003-06-01 14:26 UTC (permalink / raw)


On 1/6/03 13:54, in article
BxnCa.16707$Mu3.366629@newsfep4-glfd.server.ntli.net, "chris.danx"
<spamoff.danx@ntlworld.com> wrote:

> I've simplified it by just using a floating point type, rather than an
> abstraction (the abstraction provided only additional complexity and
> little benefit otherwise).

Matrix multiplication is also useful with integer and Boolean elements
(e.g. for representing relations, and for graph-theory adjacency and
path-length computations).
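
With the generic Mul from Dmitry's post, for instance, a Boolean
instantiation is just (the names here are only illustrative):

   type Adjacency is
      array (Positive range <>, Positive range <>) of Boolean;

   procedure Mul_Bool is
      new Mul (Element => Boolean,
               Zero    => False,
               "+"     => "or",
               "*"     => "and",
               Index   => Positive,
               Matrix  => Adjacency);

Repeatedly multiplying a reachability matrix in place by a separate
adjacency matrix then extends every path by one edge.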

It would be interesting to know how well versions instantiated with float,
integer and boolean types compare in terms of performance with non-generic
versions, especially using GNAT.

> Now I'm wondering if the cost of the more
> complex code is more expensive than doing an out of place multiplication
> followed by a copy back to the original array.  I guess that'd depend on
> the code and the compiler.  How do I find out which is cheaper in time
> for a given compiler (GNAT)?

For large (square) matrices, in-place needs two matrices rather than
three, so it lets you run a 50% bigger problem in the same amount of
cache.  It also improves locality of reference considerably.
I would guess that these effects would more than outweigh the extra copying
done by the in-place version. But I could be wrong. You need to run some
tests and see ...

-- 
Bill-Findlay chez blue-yonder.co.uk ("-" => "")

P.S. I hope the exams went well!





end of thread

Thread overview: 4+ messages
2003-06-01  0:35 In place matrix multiplication? chris.danx
2003-06-01  9:18 ` Dmitry A. Kazakov
2003-06-01 12:54   ` chris.danx
2003-06-01 14:26     ` Bill Findlay
