comp.lang.ada
From: jonathan <johnscpg@googlemail.com>
Subject: Re: Copying rows in a two dimensional array.
Date: Sat, 13 Feb 2010 16:42:13 -0800 (PST)
Message-ID: <c79a33a3-2b3e-46f1-9abf-d596f2f4b4a9@y33g2000yqb.googlegroups.com>
In-Reply-To: m4p8kh.376.ln@hunter.axlog.fr

On Feb 2, 8:52 am, Jean-Pierre Rosen <ro...@adalog.fr> wrote:
> Jerry wrote:
> > I've never understood why Ada does not allow slicing in
> > multidimensional arrays. What are the safety issues involved? And how
> > is it safe to force the programmer into ad hoc methods?
>
> One-dimensional slices are simple and efficient. Multidimensional slices
> are a can of worms.
>
> I guess you are thinking about rectangular slices. But why stop there? A
> slice comprising the main diagonal and the diagonal above and below can
> be very useful for some calculations. Or a slice which is the triangular
> part of the upper half...
>
> AFAICT, Fortran 90 does provide this - and the syntax is so complicated
> that nobody uses it. And implementation is also a nightmare.
>
> When designing a programming language, you have to stop at some point.
> The ratio (cost of implementation) / usefulness is a good measure for
> this. I think the ratio was simply too high for this feature.
>
> --
> ---------------------------------------------------------
>            J-P. Rosen (ro...@adalog.fr)
> Visit Adalog's web site at http://www.adalog.fr


I've never felt the need for two-dimensional (or higher-
dimensional) slicing. It's partly a performance issue: if
you make the data storage matrix as small as possible (i.e.
make it exactly the same size as the matrix transformation
you are doing), then you sometimes get a disastrous loss of
efficiency. If on the other hand you always make the data storage
matrix larger than necessary (so that your transformation is
always performed on a sub-matrix of the data storage matrix),
then you always have the option of avoiding these efficiency
losses. Once you write the transformation routine so that it
operates on sub-matrices of the data storage matrix, you
usually don't have to slice or copy a sub-matrix out of
the data storage matrix in order to transform it.
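
To make that concrete, here is a minimal Ada sketch of the idea
(the names Real, Matrix and Transform are invented for
illustration; this is not taken from the routines timed below).
The storage matrix is declared with padded bounds, and the
transformation routine is told which sub-matrix to work on, so
nothing ever needs to be sliced or copied:

   procedure Padded_Storage_Demo is

      type Real is digits 15;

      Padding : constant := 24;
      N       : constant := 2048;         --  size of the live matrix
      M       : constant := N + Padding;  --  size of the storage matrix

      type Matrix is array (1 .. M, 1 .. M) of Real;
      type Matrix_Ptr is access Matrix;

      --  Operates on the sub-matrix in rows/columns First .. Last;
      --  the padding beyond Last is never touched.
      procedure Transform (A : in out Matrix; First, Last : in Integer) is
      begin
         for I in First .. Last loop
            for J in First .. Last loop
               A (I, J) := 2.0 * A (I, J);  --  stand-in for real work
            end loop;
         end loop;
      end Transform;

      Data : constant Matrix_Ptr := new Matrix'(others => (others => 0.0));

   begin
      Transform (Data.all, First => 1, Last => N);
   end Padded_Storage_Demo;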

Here are some Ada and Fortran examples of the
problem on various sized data storage arrays.
I used some bits and pieces of routines from
http://web.am.qub.ac.uk/users/j.parker/miscellany/

First example: we eigen-decompose an N x N = 2048 x 2048 matrix.
The data storage matrix is M x M = (2048+Padding) x (2048+Padding).
Here is the running time in seconds of an iterative Jacobi
eigen-decomposition:

   2048x2048:  322 sec, gnat      (Padding=24)
   2048x2048: 1646 sec, gnat      (Padding=0)
   2048x2048: 1632 sec, gfortran  (Padding=0)
   2048x2048: 1492 sec, ifort     (Padding=0)

The observed fivefold slowdown in the absence of padded arrays is
unacceptable, even if it is a rare event (occurring only on 2**p
sized data arrays). In fact it's not all that rare ... more
comments on that below.  (BTW, ifort is the Intel Fortran
compiler, with all optimizations at maximum; gfortran is the GCC
variant.)

By the way, I never bothered to write a paddable Fortran routine,
but here are some more timings near power-of-2 array bounds:

   1023x1023: 33.5 sec, gnat     (Padding=9)
   1023x1023: 42.4 sec, gfortran (Padding=0)
   1023x1023: 37.3 sec, ifort    (Padding=0)

   1022x1022: 33.5 sec, gnat     (Padding=10)
   1022x1022: 30.2 sec, gfortran (Padding=0)
   1022x1022: 28.3 sec, ifort    (Padding=0)

   1024x1024: 33.2 sec, gnat     (Padding=8)
   1024x1024: 96.0 sec, gnat     (Padding=0)
   1024x1024: 116  sec, gfortran (Padding=0)
   1024x1024: 43   sec, ifort    (Padding=0)

There is one puzzle here I don't have time to solve.  Normally, a
good Fortran compiler will automatically pad the arrays for you
... I recall that happening in the past. This time it seems to
have slipped its Fortran mind. The ifort man pages say:

   -O3 enables "Padding the size of certain power-of-two
   arrays to allow more  efficient cache use."

But I used -O3 and also -O3 -fast ... maybe I did something
wrong, but there is an important lesson here: compiler
optimization policies change over time, and of course vary from
compiler to compiler. You can't rely on them or even, amazingly,
on the man pages. It's much better to write the program in a way
that is insensitive to changing optimization policies.

A web search for "cache thrashing" will reveal much depressing
detail on the subject. The efficiency problems discussed above
occur as our arrays become large and spill out of the L3 cache
(6 MB in the present case).
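
A rough back-of-envelope calculation shows why power-of-2 sizes
are the worst case (assuming 8-byte reals and the usual
power-of-2 cache geometry): in an unpadded 2048 x 2048 array,
each row occupies 2048 * 8 = 16384 bytes, an exact multiple of
the stride at which addresses map to the same cache set, so
walking down a column sends every element to the same few sets
while the rest of the cache sits idle. With Padding = 24 the row
length becomes 2072 * 8 = 16576 bytes, and successive rows land
in different sets.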

Just to demonstrate that these problems show up on all sorts of
array sizes, I did some runs in the 2000 to 3000 range, this time
using a Householder decomposition scavenged from the Golub
singular value decomposition. You can still find plenty of 500%
slowdowns:

   2102x2102, 3.93 sec, gnat  (Padding = 0)
   2102x2102, 3.19 sec, gnat  (Padding = 8)

   2176x2176, 5.03 sec, gnat  (Padding = 0)
   2176x2176, 3.42 sec, gnat  (Padding = 8)

   2304x2304, 8.47 sec, gnat  (Padding = 0)
   2304x2304, 4.52 sec, gnat  (Padding = 8)

   2560x2560, 24.1 sec, gnat  (Padding = 0)
   2560x2560, 5.42 sec, gnat  (Padding = 8)

   3072x3072, 38.9 sec, gnat  (Padding = 0)
   3072x3072, 7.76 sec, gnat  (Padding = 8)

   3584x3584, 53.2 sec, gnat  (Padding = 0)
   3584x3584, 11.5 sec, gnat  (Padding = 8)

Finally, an example on 1-dim arrays, using fast Fourier
transforms (FFTs). The standard, and most common, FFT is
computed on a data set of power-of-2 length: 0 .. 2**p - 1. I
timed computation of these FFTs on arrays of length 2**p, and
compared this with computation on arrays of
length 2**p + Padding, where Padding = 24.  The computation
on the padded arrays was faster. The ratio of running times
(padded / unpadded) is:

   p = 10  ratio = .93
   p = 11  ratio = .88
   p = 12  ratio = .76
   p = 13  ratio = .79
   p = 14  ratio = .76
   p = 15  ratio = .75
   p = 16  ratio = .77
   p = 17  ratio = .84
   p = 18  ratio = .75
   p = 19  ratio = .45
   p = 20  ratio = .63
   p = 21  ratio = .69
   p = 22  ratio = .69
   p = 23  ratio = .67
   p = 24  ratio = .62
   p = 25  ratio = .62

So the problem is more common here, but smaller. (I still
consider these efficiency losses unacceptable, especially
in a routine whose reason for existence is efficiency.)
The problem is worse still when you take FFTs of two-
dimensional arrays.
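
In Ada the padded declaration costs only one extra constant.
A sketch of what I mean (Real, Data_Vector and the bounds are
illustrative only, not the actual FFT code):

   procedure FFT_Padding_Demo is

      type Real is digits 15;

      P       : constant := 16;
      Padding : constant := 24;

      --  Storage runs 0 .. 2**P - 1 + Padding; an FFT of length
      --  2**P only ever touches elements 0 .. 2**P - 1.
      type Data_Vector is array (0 .. 2**P - 1 + Padding) of Real;

      Re : Data_Vector := (others => 0.0);
      Im : Data_Vector := (others => 0.0);

   begin
      null;  --  a real FFT would run here on Re, Im (0 .. 2**P - 1)
   end FFT_Padding_Demo;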

There is another (and entirely independent) reason
I prefer routines that perform their transformations on
arbitrary sub-matrices (or on arbitrary diagonal blocks)
of the data storage matrix. After writing my first linear
algebra routine, I was very pleased with myself, but it
didn't really do what I wanted.  I realized I needed to
transform the diagonal sub-blocks of a large matrix,
and to do it iteratively on arbitrarily sized diagonal
sub-blocks. It was a very simple matter to modify the code to
do this, and the result was so convenient that I've never
considered doing it otherwise.
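
For what it's worth, the interface that falls out of that looks
roughly like the following sketch (names invented; the real
routines are in the miscellany linked above). The caller hands
the routine the full data storage matrix plus the bounds of the
diagonal block to transform:

   procedure Block_Demo is

      type Real is digits 15;
      type Matrix is array (Positive range <>, Positive range <>) of Real;

      --  Transforms the diagonal block in rows/columns
      --  Start .. Finish of the data storage matrix A.
      procedure Transform_Block
        (A : in out Matrix; Start, Finish : in Positive) is
      begin
         for I in Start .. Finish loop
            for J in Start .. Finish loop
               A (I, J) := 2.0 * A (I, J);  --  stand-in for the transform
            end loop;
         end loop;
      end Transform_Block;

      Data : Matrix (1 .. 64, 1 .. 64) := (others => (others => 1.0));

   begin
      --  sweep two arbitrarily sized diagonal sub-blocks
      Transform_Block (Data, Start => 1,  Finish => 40);
      Transform_Block (Data, Start => 41, Finish => 64);
   end Block_Demo;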

Jonathan


