comp.lang.ada
* Larger matrices
@ 2008-08-06 13:32 amado.alves
  2008-08-06 14:29 ` Georg Bauhaus
  2008-08-07 12:35 ` Alex R. Mosteo
  0 siblings, 2 replies; 41+ messages in thread
From: amado.alves @ 2008-08-06 13:32 UTC (permalink / raw)


I managed to process 1000 x 1000 matrices with the corresponding Ada 2005
standard library using GNAT GPL 2008. Processing includes
multiplication by a vector.

Now I want larger matrices, say 5000 x 5000. The objects seem to be
allocated OK (dynamically), but multiplication gives a "segmentation
fault."

Any tips on how to overcome this?

Thanks a lot.



^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: Larger matrices
  2008-08-06 13:32 Larger matrices amado.alves
@ 2008-08-06 14:29 ` Georg Bauhaus
  2008-08-06 15:01   ` amado.alves
  2008-08-07 12:35 ` Alex R. Mosteo
  1 sibling, 1 reply; 41+ messages in thread
From: Georg Bauhaus @ 2008-08-06 14:29 UTC (permalink / raw)


amado.alves@gmail.com wrote:

> Now I want larger matrices, say 5000 x 5000. The objects seem to be
> allocated OK (dynamically), but multiplication gives a "segmentation
> fault."
> 
> Any tips on how to overcome this?

Have you tried having gnatmake recompile the library
units (-a,-f,-s) with, say, -g, -O, -gnata, -gnato,
and -fstack-check?  Might yield a better diagnostic.
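For reference, the suggested rebuild might be invoked along these lines; the
main unit name is a placeholder, and the exact switch set should be adapted
to the project:

```shell
# Recompile the program and the library units it depends on (-a),
# unconditionally (-f), keeping switch consistency (-s), with debug
# info, assertions, overflow checks and stack checking enabled:
gnatmake -a -f -s main.adb -g -O -gnata -gnato -fstack-check
```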


--
Georg Bauhaus
Y A Time Drain  http://www.9toX.de




* Re: Larger matrices
  2008-08-06 14:29 ` Georg Bauhaus
@ 2008-08-06 15:01   ` amado.alves
  2008-08-06 17:29     ` amado.alves
  0 siblings, 1 reply; 41+ messages in thread
From: amado.alves @ 2008-08-06 15:01 UTC (permalink / raw)


> Have you tried having gnatmake recompile the library
> units (-a,-f,-s) with, say, -g, -O, -gnata, -gnato,
> and -fstack-check?  Might yield a better diagnostic. (Bauhaus)

I'll try that and let you all know. Thanks a lot for this great first
orientation in the myriad set of compiler options.




* Re: Larger matrices
  2008-08-06 15:01   ` amado.alves
@ 2008-08-06 17:29     ` amado.alves
  2008-08-06 17:58       ` Dmitry A. Kazakov
  2008-08-06 18:44       ` Jeffrey R. Carter
  0 siblings, 2 replies; 41+ messages in thread
From: amado.alves @ 2008-08-06 17:29 UTC (permalink / raw)


It's a Storage_Error with the message "stack overflow detected".

My program does not use the stack, so probably the multiplication is
implemented as a function with parameters passed on the stack, and this
overflows, as it does if I try to create the large arrays as static
objects.

Next I will try to increase the stack size, but I bet I'm gonna hit
the 2G ceiling :-(

(Recently I saw memory pointers being defined as 32-bit integers.)




* Re: Larger matrices
  2008-08-06 17:29     ` amado.alves
@ 2008-08-06 17:58       ` Dmitry A. Kazakov
  2008-08-06 18:40         ` amado.alves
  2008-08-06 18:44       ` Jeffrey R. Carter
  1 sibling, 1 reply; 41+ messages in thread
From: Dmitry A. Kazakov @ 2008-08-06 17:58 UTC (permalink / raw)


On Wed, 6 Aug 2008 10:29:09 -0700 (PDT), amado.alves@gmail.com wrote:

> It's a Storage_Error with the message "stack overflow detected".
> 
> My program does not use the stack, so probably the multiplication is
> implemented as a function with parameters passed on the stack and this
> overflows, like it does if I try to create the large arrays as static
> objects.

IMO, for such huge matrices and vectors, it is better to use in-place
operations, rather than a loose functional programming style.

And do you have dense 5000 x 5000 matrices? You are a hardworking man!
(:-))

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de




* Re: Larger matrices
  2008-08-06 17:58       ` Dmitry A. Kazakov
@ 2008-08-06 18:40         ` amado.alves
  2008-08-07  7:44           ` Dmitry A. Kazakov
  0 siblings, 1 reply; 41+ messages in thread
From: amado.alves @ 2008-08-06 18:40 UTC (permalink / raw)


> IMO, for such huge matrices and vectors, it is better to use in-place
> operations, rather than a loose functional programming style.

You mean a procedure with an out parameter instead of a function? Is
there such a utility?

> And do you have dense 5000 x 5000 matrices? You are a hardworking man!

It's a hypertext. It starts out sparse, but then gets less sparse
with use, because it is an adaptive hypertext system and I am
using spreading activation to create new links. It will probably never
be very dense, but new links may appear anywhere in the matrix. And
it's only a prototype for a thesis, and the focus is on the
theoretical model, not the implementation, so I was prepared to spend a
few megabytes of my gigabyte of RAM to use the standard library, if
only the compiler would let me...

I found the switches -fstack-usage and -fcallgraph-info (gcc) in the
GNAT manual, but gnatmake does not seem to accept them.

I also found that GNAT calls a BLAS/LAPACK function named
"gemv" to implement the matrix-vector "*", so my diagnosis of an
overflow caused by (value?) passing on the stack is probably right.

So now I want to increase the stack size. To 200M. The right gnatmake
switch, please...




* Re: Larger matrices
  2008-08-06 17:29     ` amado.alves
  2008-08-06 17:58       ` Dmitry A. Kazakov
@ 2008-08-06 18:44       ` Jeffrey R. Carter
  2008-08-06 19:12         ` amado.alves
  1 sibling, 1 reply; 41+ messages in thread
From: Jeffrey R. Carter @ 2008-08-06 18:44 UTC (permalink / raw)


amado.alves@gmail.com wrote:
> It's a Storage_Error with the message "stack overflow detected".
> 
> My program does not use the stack, so probably the multiplication is
> implemented as a function with parameters passed on the stack and this
> overflows, like it does if I try to create the large arrays as static
> objects.

Probably the operation declares a result object on the stack, which causes the 
stack overflow. GNAT is pretty good about not passing large objects by copy.

You may need to create your own operations that use the heap rather than the stack.

-- 
Jeff Carter
"Sheriff murdered, crops burned, stores looted,
people stampeded, and cattle raped."
Blazing Saddles
35




* Re: Larger matrices
  2008-08-06 18:44       ` Jeffrey R. Carter
@ 2008-08-06 19:12         ` amado.alves
  2008-08-06 23:33           ` amado.alves
  0 siblings, 1 reply; 41+ messages in thread
From: amado.alves @ 2008-08-06 19:12 UTC (permalink / raw)


> Probably the operation declares a result object on the stack, which causes the
> stack overflow. GNAT is pretty good about not passing large objects by copy.

Yes, I inspected it a little: the "*" function is entered fine; it's
the BLAS/LAPACK call that explodes.

> You may need to create your own operations that use the heap rather than the stack.

Because nobody has done this yet?
Or should I increase the stack size?




* Re: Larger matrices
  2008-08-06 19:12         ` amado.alves
@ 2008-08-06 23:33           ` amado.alves
  2008-08-07  3:02             ` Randy Brukardt
                               ` (2 more replies)
  0 siblings, 3 replies; 41+ messages in thread
From: amado.alves @ 2008-08-06 23:33 UTC (permalink / raw)


Upon failing miserably to convince GNAT to use a stack of 200M, I am
currently working around the problem by writing my own
matrix-by-vector multiplication function. It takes 2 seconds for a
5000 x 5000 matrix, but it does not explode.
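Such a hand-written product might look like the following sketch. The
function name and the use of the pre-instantiated Ada.Numerics.Real_Arrays
are my assumptions, not the poster's actual code; the point is that the
result is only O(n), so it fits comfortably on the default stack:

```ada
with Ada.Numerics.Real_Arrays; use Ada.Numerics.Real_Arrays;

--  Plain O(n**2)-time, O(n)-stack matrix-by-vector product.
--  Assumes V'Range matches M'Range (2).
function Product (M : Real_Matrix; V : Real_Vector) return Real_Vector is
   R : Real_Vector (M'Range (1)) := (others => 0.0);
begin
   for I in M'Range (1) loop
      for J in M'Range (2) loop
         R (I) := R (I) + M (I, J) * V (J);
      end loop;
   end loop;
   return R;
end Product;
```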

I remain interested in ways to increase the stack size (or some other
way) to enable use of the standard library. Not so much for speed as
for validation (GNAT passes the ACATS, right? Is there an ACATS setup
for this library yet?)

Thanks a lot.




* Re: Larger matrices
  2008-08-06 23:33           ` amado.alves
@ 2008-08-07  3:02             ` Randy Brukardt
  2008-08-07  6:30             ` Georg Bauhaus
  2008-08-07 11:28             ` johnscpg
  2 siblings, 0 replies; 41+ messages in thread
From: Randy Brukardt @ 2008-08-07  3:02 UTC (permalink / raw)


<amado.alves@gmail.com> wrote in message 
news:afa48ae2-cee6-41d6-9390-5cd6b12bdfcf@y21g2000hsf.googlegroups.com...
> Upon failing miserably to convince GNAT to use a stack of 200M, I am
> currently working around the problem by writing my own
> matrix-by-vector multiplication function. It takes 2 seconds for a
> 5000 x 5000 matrix, but it does not explode.
>
> I continue to be interested in ways to augment the stack size (or
> another way) to enable use of the standard library. Not so much for
> speed, but more for validation (GNAT passes ACATS right? Are ACATS for
> this library setup yet?)

No, no ACATS tests for this (or any of the new packages, for that matter)
yet. Since packages are relatively easy to implement correctly, I've been
concentrating the new tests on the new features (like interfaces and null
exclusions and functions returning limited types) that are much more likely
to be gotten wrong. I've also paid no attention at all to the non-mandatory
annexes. So that's two strikes against these packages.

If someone wants to submit ACATS-style tests for any of the new packages, 
they'd be much appreciated. (There surely needs to be at least some minimal 
tests just to ensure that they exist.)

                                    Randy.






* Re: Larger matrices
  2008-08-06 23:33           ` amado.alves
  2008-08-07  3:02             ` Randy Brukardt
@ 2008-08-07  6:30             ` Georg Bauhaus
  2008-08-07  8:01               ` amado.alves
  2008-08-07 11:28             ` johnscpg
  2 siblings, 1 reply; 41+ messages in thread
From: Georg Bauhaus @ 2008-08-07  6:30 UTC (permalink / raw)


amado.alves@gmail.com wrote:

> I continue to be interested in ways to augment the stack size (or another
> way) to enable use of the standard library. Not so much for speed, but
> more for validation (GNAT passes ACATS right? Are ACATS for this
> library setup yet?)

Have you tried using ulimit or some such?




* Re: Larger matrices
  2008-08-06 18:40         ` amado.alves
@ 2008-08-07  7:44           ` Dmitry A. Kazakov
  0 siblings, 0 replies; 41+ messages in thread
From: Dmitry A. Kazakov @ 2008-08-07  7:44 UTC (permalink / raw)


On Wed, 6 Aug 2008 11:40:09 -0700 (PDT), amado.alves@gmail.com wrote:

>> IMO, for such huge matrices and vectors, it is better to use in-place
>> operations, rather than a loose functional programming style.
> 
> You mean a procedure with out parameter instead of function?

in out. For example:

procedure Multiply
   (Accumulator : in out Real_Matrix; Multiplicand : Real_Matrix);

> Is there such utility?

You mean in the standard library. Unfortunately no, which limits its
usability. But you could implement them, as well as frequent mixed cases
like:

   A := A * B + C

>> And do you have dense 5000 x 5000 matrices? You are a hardworking man!
> 
> It's an hypertext.

An incidence matrix? I believe there exist special methods for
representing and handling those. But sorry, my knowledge of numerical
methods is quite outdated.

> So now I want to increase the stack size. To 200M.

Ooch, that is a bad, bad idea. The stack is not for that. It should take no
more than 5 minutes to implement Multiply so that it uses O(n) of pool
instead of O(n**2) of stack...
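A minimal sketch of that idea, with my own names: the operands and the
result live on the heap, and the in-place product itself needs only a
constant amount of stack.

```ada
with Ada.Numerics.Real_Arrays; use Ada.Numerics.Real_Arrays;

procedure In_Place_Demo is
   type Matrix_Ptr is access Real_Matrix;
   type Vector_Ptr is access Real_Vector;

   --  Caller supplies the result vector, so the operation creates no
   --  large temporary.  Assumes Result'Range = M'Range (1) and
   --  V'Range = M'Range (2).
   procedure Multiply
     (Result : out Real_Vector; M : Real_Matrix; V : Real_Vector) is
   begin
      for I in M'Range (1) loop
         Result (I) := 0.0;
         for J in M'Range (2) loop
            Result (I) := Result (I) + M (I, J) * V (J);
         end loop;
      end loop;
   end Multiply;

   N : constant := 5_000;
   M : constant Matrix_Ptr := new Real_Matrix'(1 .. N => (1 .. N => 1.0));
   V : constant Vector_Ptr := new Real_Vector'(1 .. N => 1.0);
   R : constant Vector_Ptr := new Real_Vector (1 .. N);
begin
   Multiply (R.all, M.all, V.all);  --  each R (I) is then 5000.0
end In_Place_Demo;
```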

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de




* Re: Larger matrices
  2008-08-07  6:30             ` Georg Bauhaus
@ 2008-08-07  8:01               ` amado.alves
  2008-08-07  8:55                 ` Egil Høvik
  2008-08-07 19:13                 ` Jeffrey R. Carter
  0 siblings, 2 replies; 41+ messages in thread
From: amado.alves @ 2008-08-07  8:01 UTC (permalink / raw)


I tried a few tricks for the stack size but failed. I did not find clear
documentation. Still interested.

In the meantime, I made a modification to the library.
It is interesting.
For matrix by vector multiplication, we have the useful property that

| A1 | x B = | A1 x B |
| A2 |       | A2 x B |
|... |       | ...    |

so I am using this property to feed the BLAS function piecemeal.
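A sketch of that piecemeal scheme, under my own assumptions (band size,
names, and that the standard "*" from Ada.Numerics.Real_Arrays, which ends
up in BLAS gemv, is applied to each band):

```ada
with Ada.Numerics.Real_Arrays; use Ada.Numerics.Real_Arrays;

--  Split M into horizontal bands of at most Rows rows and multiply
--  each band by V, so only a small Rows x N temporary is ever needed.
function Banded_Product
  (M : Real_Matrix; V : Real_Vector; Rows : Positive := 50)
   return Real_Vector
is
   Result : Real_Vector (M'Range (1));
   First  : Integer := M'First (1);
begin
   while First <= M'Last (1) loop
      declare
         Last : constant Integer :=
           Integer'Min (First + Rows - 1, M'Last (1));
         Band : Real_Matrix (First .. Last, M'Range (2));
      begin
         for I in Band'Range (1) loop      --  copied element by element,
            for J in Band'Range (2) loop   --  since 2-D slicing is not
               Band (I, J) := M (I, J);    --  available in Ada
            end loop;
         end loop;
         Result (First .. Last) := Band * V;
         First := Last + 1;
      end;
   end loop;
   return Result;
end Banded_Product;
```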

/*
And Ada got in the way: slicing restricted to one-dimensional arrays!
La donna è mobile!
*/




* Re: Larger matrices
  2008-08-07  8:01               ` amado.alves
@ 2008-08-07  8:55                 ` Egil Høvik
  2008-08-07 19:13                 ` Jeffrey R. Carter
  1 sibling, 0 replies; 41+ messages in thread
From: Egil Høvik @ 2008-08-07  8:55 UTC (permalink / raw)


In GNAT, the default stack size (for the environment task, and for tasks
without a specified Storage_Size) is 2MB (at least for the targets I
have looked at). If you're using version 5.04 or higher, you can tell
gnatbind to use a different default stack size:

 -dnn[k|m] Default primary stack size = nn [kilo|mega] bytes
 -Dnn[k|m] Default secondary stack size = nn [kilo|mega] bytes


If you're using gnatmake, take a look at -bargs for how to pass
binder options.
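Putting the two together, the 200 MB stack discussed earlier could be
requested like this (main.adb is a placeholder):

```shell
# -bargs passes the following switches to gnatbind;
# -d200m sets the default primary stack size to 200 megabytes.
gnatmake main.adb -bargs -d200m
```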


--
~egilhh




* Re: Larger matrices
  2008-08-06 23:33           ` amado.alves
  2008-08-07  3:02             ` Randy Brukardt
  2008-08-07  6:30             ` Georg Bauhaus
@ 2008-08-07 11:28             ` johnscpg
  2 siblings, 0 replies; 41+ messages in thread
From: johnscpg @ 2008-08-07 11:28 UTC (permalink / raw)




amado.al...@gmail.com wrote:
> Upon failing miserably to convince GNAT to use a stack of 200M, I am
> currently working around the problem by writing my own
> matrix-by-vector multiplication function. It takes 2 seconds for a
> 5000 x 5000 matrix, but it does not explode.
>
> I continue to be interested in ways to augment the stack size (or
> another way) to enable use of the standard library. Not so much for
> speed, but more for validation (GNAT passes ACATS right? Are ACATS for
> this library setup yet?)
>
> Thanks a lot.

I'm not sure if this is relevant to your problem, but when I get a
segmentation fault in Linux, it's usually due to an insufficient stack
limit in the shell I'm in.
Read up on ulimit for the bash shell. I use bash, and the command is
 ulimit -s <some number of kilobytes>.  I get lazy and type

 ulimit -s unlimited

 For csh I seem to recall it's: limit stacksize unlimited.

    in BASH:
      ulimit -s unlimited
      (type "ulimit -s" to see stack setting.  The "s" is for stack.)
      (type "ulimit -a" to see all settings.)

    in CSH try:
      limit stacksize unlimited
      limit datasize unlimited
      (type "limit" to see settings.)

jonathan




* Re: Larger matrices
  2008-08-06 13:32 Larger matrices amado.alves
  2008-08-06 14:29 ` Georg Bauhaus
@ 2008-08-07 12:35 ` Alex R. Mosteo
  2008-08-07 13:40   ` amado.alves
  1 sibling, 1 reply; 41+ messages in thread
From: Alex R. Mosteo @ 2008-08-07 12:35 UTC (permalink / raw)


amado.alves@gmail.com wrote:

> I managed to process 1000 x 1000 matrices with the corresponding Ada 2005
> standard library using GNAT GPL 2008. Processing includes
> multiplication by a vector.
> 
> Now I want larger matrices, say 5000 x 5000. The objects seem to be
> allocated OK (dynamically), but multiplication gives a "segmentation
> fault."
> 
> Any tips on how to overcome this?

I will give you all the tips I remember on how to get larger stacks (in linux),
but I'm going from memory so you'll have to double-check:

ulimit -s unlimited at the shell level.

Wrap your program, if it isn't already, in a task, so that you can use
pragma Storage_Size for that task. I see in other posts that there are
switches to set the stack size of the environment task; take your pick.
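A minimal sketch of that task-wrapping trick (names and the local array
are illustrative): the worker task gets its own, explicitly sized stack,
so the environment task's limit no longer matters.

```ada
procedure Wrapped_Main is
   task Worker is
      pragma Storage_Size (200 * 1024 * 1024);  --  200 MB, as discussed
   end Worker;

   task body Worker is
      --  Roughly 40 MB of locals, far beyond the 2 MB default stack:
      Big : array (1 .. 10_000_000) of Float := (others => 0.0);
   begin
      Big (Big'First) := 1.0;  --  stand-in for the real computation
   end Worker;
begin
   null;  --  the main program waits here until Worker terminates
end Wrapped_Main;
```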

The linker settings are the fuzziest part. This old thread can put you on
the right track:

http://groups.google.com/group/comp.lang.ada/browse_thread/thread/73c865853bdc4937

In particular, the first reply mentions arguments to the linker that ring a
bell, although I'm sure I have seen at least one alternative suggestion for
a set of linker options dealing with the stack.




* Re: Larger matrices
  2008-08-07 12:35 ` Alex R. Mosteo
@ 2008-08-07 13:40   ` amado.alves
  2008-08-07 15:12     ` Alex R. Mosteo
  0 siblings, 1 reply; 41+ messages in thread
From: amado.alves @ 2008-08-07 13:40 UTC (permalink / raw)


Thanks, but none of this works.
Modifying the library now.




* Re: Larger matrices
  2008-08-07 13:40   ` amado.alves
@ 2008-08-07 15:12     ` Alex R. Mosteo
  2008-08-07 16:25       ` amado.alves
  0 siblings, 1 reply; 41+ messages in thread
From: Alex R. Mosteo @ 2008-08-07 15:12 UTC (permalink / raw)


amado.alves@gmail.com wrote:

> Thanks but nothing of this works.
> Modifying the library now.

I was able to raise the stack size using these methods, but maybe you're
hitting some other wall. Have you tried a simple test case to see whether
you're getting a larger stack at all? I seem to remember some linker
settings failing silently for someone, until they got them right.




* Re: Larger matrices
  2008-08-07 15:12     ` Alex R. Mosteo
@ 2008-08-07 16:25       ` amado.alves
  2008-08-07 18:21         ` amado.alves
  0 siblings, 1 reply; 41+ messages in thread
From: amado.alves @ 2008-08-07 16:25 UTC (permalink / raw)


> I was able to raise the stack size using these methods, but maybe you're
> hitting some other wall. Have you tried with some simple testcase to see if
> you're getting a larger stack at all? I seem to remember that some linker
> settings were failing silently for someone, until it got it right.

Maybe I will try that some other time. Right now it is too risky; I am
pressed for time. Thanks.




* Re: Larger matrices
  2008-08-07 16:25       ` amado.alves
@ 2008-08-07 18:21         ` amado.alves
  0 siblings, 0 replies; 41+ messages in thread
From: amado.alves @ 2008-08-07 18:21 UTC (permalink / raw)


Despite the risk, I gave it another try (at the stack size), because
something smelled off.
It turns out that my system must be badly broken!
Programs with tasks explode with the message "Illegal instruction".
Even the simplest program:

procedure Simplest is
   task type Subtask_T;
   task body Subtask_T is
   begin
      null;
   end Subtask_T;
   Subtask : Subtask_T;
begin
   null;
end Simplest;

Tomorrow in the office I'll try a different computer.




* Re: Larger matrices
  2008-08-07  8:01               ` amado.alves
  2008-08-07  8:55                 ` Egil Høvik
@ 2008-08-07 19:13                 ` Jeffrey R. Carter
  2008-08-08  9:59                   ` amado.alves
  1 sibling, 1 reply; 41+ messages in thread
From: Jeffrey R. Carter @ 2008-08-07 19:13 UTC (permalink / raw)


amado.alves@gmail.com wrote:
> 
> And Ada got in the way: slicing restricted to one-dimensional arrays!

Compared to other languages that have no slicing at all?

-- 
Jeff Carter
"To Err is human, to really screw up, you need C++!"
Stéphane Richard
63




* Re: Larger matrices
  2008-08-07 19:13                 ` Jeffrey R. Carter
@ 2008-08-08  9:59                   ` amado.alves
  2008-08-08 10:38                     ` Dmitry A. Kazakov
                                       ` (2 more replies)
  0 siblings, 3 replies; 41+ messages in thread
From: amado.alves @ 2008-08-08  9:59 UTC (permalink / raw)


> > And Ada got in the way: slicing restricted to one-dimensional arrays!
>
> Compared to other languages that have no slicing at all?

There are languages better than Ada at array indexing, including
slicing. If there aren't, there should be!

A two-dimensional array like the standard Real_Matrix should be
compatible with a one-dimensional array like Real_Vector.
One row (or one column) of a matrix is a vector.
In fact, aggregate notation reflects this:

Matrix := ( (a, b, c) ,
            (d, e, f) ,
            (g, h, i) );

Vector := (x, y, z);

but then the obvious operations are illegal:

Vector := Matrix (1); -- (a, b, c)

or

Band_Matrix := Matrix (1 .. 2); -- ( (a, b, c) ,
                                --   (d, e, f) )

A good language should even permit

Vertical_Band_Matrix := Matrix (Matrix'Range, 1 .. 2); -- ( (a, b) ,
                                                       --   (d, e) ,
                                                       --   (g, h) )
or

Block := Matrix (1 .. 2, 1 .. 2); -- ( (a, b) ,
                                  --   (d, e) )




* Re: Larger matrices
  2008-08-08  9:59                   ` amado.alves
@ 2008-08-08 10:38                     ` Dmitry A. Kazakov
  2008-08-08 11:29                     ` Jean-Pierre Rosen
  2008-08-08 11:35                     ` Georg Bauhaus
  2 siblings, 0 replies; 41+ messages in thread
From: Dmitry A. Kazakov @ 2008-08-08 10:38 UTC (permalink / raw)


On Fri, 8 Aug 2008 02:59:56 -0700 (PDT), amado.alves@gmail.com wrote:

>>> And Ada got in the way: slicing restricted to one-dimensional arrays!
>>
>> Compared to other languages that have no slicing at all?
> 
> There are languages better than Ada at array indexing, including
> slicing. If there aren't, there should be!
> 
> A two-dimensional array like the standard Real_Matrix should be
> compatible with a one-dimensional array like Real_Vector.
> One row (or one column) of a matrix is a vector.
> In fact aggregate notation reflects this
> 
> Matrix := ( (a, b, c) ,
>             (d, e, f) ,
>             (g, h, i) );
> 
> Vector := (x, y, z);
> 
> but then the obvious are illegal:
> 
> Vector := Matrix (1); -- (a, b, c)
>
> or
> 
> Band_Matrix := Matrix (1 .. 2); -- ( (a, b, c) ,
>                                 --   (d, e, f) )
> 
> A good language should even permit
> 
> Vertical_Band_Matrix := Matrix (Matrix'Range, 1 .. 2); -- ( (a, b) ,
>                                                        --   (d, e) ,
>                                                        --   (g, h) )
> or
> 
> Block := Matrix (1 .. 2, 1 .. 2); -- ( (a, b) ,
>                                   --   (d, e) )

A good language should support keyed notation of array indices:

Vector := Matrix (Column => 1);  -- (a, d, g)

   type Matrix is array (Row, Column : Positive range <>) of Float;

It also should provide index types:

   type Square is index (Row, Column : Positive range <>);
   type Matrix is array (Square) of Float;

and

   type Matrix is array (Row, Column : Positive range <>) of Float;
   subtype Square is Matrix'Index;

and iteration

   for I in M'Index loop -- Zeroes the whole matrix
      M (I) := 0.0;
   end loop;

and enumeration

   case A'Index is
      when (1,1) | (2,2) => ...
      when others => ...
   end case;

and array aggregates

   ((1,1)=>0.0, (1,2)=>...); -- Flat notation

The value of an index type is a tuple. Elements of the tuple are ranges and
individual indices.

All operations you mentioned above are compositions of the corresponding
slicing of the index type and then indexing the array by the index. The
result of indexing depends on the index type:

   A (1, 2) -- Element, dimension 0
   A (1..1, 2) -- Vector, dimension 1
   A (1..1, 2..2) -- Matrix, dimension 2

There also has to be range types and range values.

Last but not least, a good language should allow reasonable array renaming
with bounds slicing. Array renaming in Ada is bogus.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de




* Re: Larger matrices
  2008-08-08  9:59                   ` amado.alves
  2008-08-08 10:38                     ` Dmitry A. Kazakov
@ 2008-08-08 11:29                     ` Jean-Pierre Rosen
  2008-08-08 13:15                       ` Jeffrey Creem
  2008-08-08 11:35                     ` Georg Bauhaus
  2 siblings, 1 reply; 41+ messages in thread
From: Jean-Pierre Rosen @ 2008-08-08 11:29 UTC (permalink / raw)


amado.alves@gmail.com wrote:
>>> And Ada got in the way: slicing restricted to one-dimensional arrays!
>> Compared to other languages that have no slicing at all?
> 
> There are languages better than Ada at array indexing, including
> slicing. If there aren't, there should be!
> 
AFAIK, Fortran 90 has very sophisticated slicing of arrays.
The only thing I have heard about it is that it was so complicated to
define and use that it was a major reason why there are so few Fortran 90
compilers...

Language design is about balancing features, usefulness, and 
implementability.
-- 
---------------------------------------------------------
            J-P. Rosen (rosen@adalog.fr)
Visit Adalog's web site at http://www.adalog.fr




* Re: Larger matrices
  2008-08-08  9:59                   ` amado.alves
  2008-08-08 10:38                     ` Dmitry A. Kazakov
  2008-08-08 11:29                     ` Jean-Pierre Rosen
@ 2008-08-08 11:35                     ` Georg Bauhaus
  2008-08-08 12:11                       ` Dmitry A. Kazakov
  2 siblings, 1 reply; 41+ messages in thread
From: Georg Bauhaus @ 2008-08-08 11:35 UTC (permalink / raw)


amado.alves@gmail.com wrote:
>>> And Ada got in the way: slicing restricted to one-dimensional arrays!
>> Compared to other languages that have no slicing at all?
> 
> There are languages better than Ada at array indexing, including
> slicing. If there aren't, there should be!

Ada is a systems programming language.  As such, it is
targeting digital computers. In cases where Ada can be used
for some mathematical tasks, it is still a systems programming
language.

You could just ask for a type system that allows
assignment to diagonals of matrices, upper triangles,
switch from row order to column order right in the
type system and so on. (And, I guess, per DK, have these
type based mechanisms be construction facilities for
the programmer, with default implementations only).

Ada is about computers that have word/byte addressable
storage cells that happen to be able to store
bit combinations. Ada is not about cells of mathematical
abstractions.  Overlap is accidental.


If you do already have some theory for your graphs,
and presuming you need a working program for large
matrices, I'd suggest that you seriously consider prototyping
your algorithms using one of the fine PLs that do have the
required mathematical stuff built in.  (The various APLs, some
of which specialize in certain large data sets, come
to mind; Mathematica, and so on.)  You can always concentrate
on a specially efficient Ada implementation later.

Ada is a systems programming language for digital computer
systems performing close-to-the-hardware tasks, not
mathematics.


--
Georg Bauhaus
Y A Time Drain  http://www.9toX.de




* Re: Larger matrices
  2008-08-08 11:35                     ` Georg Bauhaus
@ 2008-08-08 12:11                       ` Dmitry A. Kazakov
  2008-08-08 14:11                         ` Georg Bauhaus
  0 siblings, 1 reply; 41+ messages in thread
From: Dmitry A. Kazakov @ 2008-08-08 12:11 UTC (permalink / raw)


On Fri, 08 Aug 2008 13:35:04 +0200, Georg Bauhaus wrote:

> amado.alves@gmail.com wrote:
>>>> And Ada got in the way: slicing restricted to one-dimensional arrays!
>>> Compared to other languages that have no slicing at all?
>> 
>> There are languages better than Ada at array indexing, including
>> slicing. If there aren't, there should be!
> 
> Ada is a systems programming language.  As such, it is
> targeting digital computers. In cases where Ada can be used
> for some mathematical tasks, it is still a systems programming
> language.

What?! Ada is a general-purpose language.

> Ada is about computers that have word/byte addressable
> storage cells that happen to be able to store
> bit combinations. Ada is not about cells of mathematical
> abstractions.  Overlap is accidental.

Submatrices can be handled efficiently [in-place] using interleaving
techniques.

But an efficient or hardware-close implementation was certainly not the
concern of the Ada.Numerics.Generic_Real_Arrays design. Otherwise it would
not use functions at all.

> If you do already have some theory for your graphs,
> and presuming you need a working program for large
> matrices, I'd suggest that you seriously consider prototyping
> your algorithms using one of the fine PLs that do have the
> required mathematical stuff built in.

Really? 90% of numeric libraries and applications are written in FORTRAN.
Do you think they were prototyped in the 60s? I guess pencil and paper
is what was used. After all, math is best prototyped in math.

The major challenge for numerical applications is not the math, but
squeezing the most out of the hardware, in terms of memory, time,
accuracy, and problem size. None of this can be "prototyped."

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de




* Re: Larger matrices
  2008-08-08 11:29                     ` Jean-Pierre Rosen
@ 2008-08-08 13:15                       ` Jeffrey Creem
  2008-08-08 13:32                         ` Dmitry A. Kazakov
  0 siblings, 1 reply; 41+ messages in thread
From: Jeffrey Creem @ 2008-08-08 13:15 UTC (permalink / raw)


Jean-Pierre Rosen wrote:
> amado.alves@gmail.com wrote:
>>>> And Ada got in the way: slicing restricted to one-dimensional arrays!
>>> Compared to other languages that have no slicing at all?
>>
>> There are languages better than Ada at array indexing, including
>> slicing. If there aren't, there should be!
>>
> AFAIK, Fortran90 has very sophisticated slicing of arrays.
> The only thing I heard about it, is that it was so complicated to define 
> and use that it was a major reason why there are so few Fortran90 
> compilers...
> 
> Language design is about balancing features, usefulness, and 
> implementability.

It is also fair to say that the vast majority of the
programmers/software engineers in the world do a pretty lousy job of
even using the features they have been given. Whether a lot of effort
put into some fancy feature that no one uses could instead be spent
improving reliability, efficiency, the IDE, or the standard
libraries is an important consideration.

I am not quite a crusty old programmer yet but I am getting there. I 
have seen a lot of code in a lot of languages from lots of developers, 
organizations, companies, etc. There are always plenty of counter 
examples but the vast majority of the code in the world (including Ada 
code) either ignores or misuses existing programming languages features 
that could/should be used to improve reliability, performance, 
maintainability, etc.

Of course there is a chance that I am just a bad programmer and that is 
why I think that... But since I am getting paid to be a software engineer, 
guiding/influencing the development of plenty of engineers, it's fair to 
say that even if I am wrong, I must be right.






* Re: Larger matrices
  2008-08-08 13:15                       ` Jeffrey Creem
@ 2008-08-08 13:32                         ` Dmitry A. Kazakov
  0 siblings, 0 replies; 41+ messages in thread
From: Dmitry A. Kazakov @ 2008-08-08 13:32 UTC (permalink / raw)


On Fri, 08 Aug 2008 13:15:00 GMT, Jeffrey Creem wrote:

> It is also fair to say that the vast majority of the 
> programmers/software engineers in the world do a pretty lousy job of 
> even using the features they have been given. Whether to put a lot of
> effort into some fancy feature that no one uses, when that same effort
> could instead improve reliability, or efficiency, or the IDE, or the
> standard libraries, is an important consideration.

This is true in general, but IMO wrong for arrays.

I believe that the underlying concepts of matrix and index are simpler than
the partial implementation provided by Ada. Restrictions put on slices, and
built-in ranges impose artificial constraints and require additional
efforts from the layman programmer.

Far more important is that this had a huge impact on the design of the
standard container library, making it considerably harder to use than it
could have been, had it provided container views in terms of matrices and
indices. The container library is intended for *daily* use.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: Larger matrices
  2008-08-08 12:11                       ` Dmitry A. Kazakov
@ 2008-08-08 14:11                         ` Georg Bauhaus
  2008-08-08 14:36                           ` Dmitry A. Kazakov
  0 siblings, 1 reply; 41+ messages in thread
From: Georg Bauhaus @ 2008-08-08 14:11 UTC (permalink / raw)


Dmitry A. Kazakov schrieb:
> On Fri, 08 Aug 2008 13:35:04 +0200, Georg Bauhaus wrote:


>> Ada is a systems programming language. 

> What?! Ada is a universal purpose language.

Assembly language is universal, too, as are Lisp and Prolog with FFI,
and so on.  This notion of "purpose" is not very specific.
Let me put it this way: Ada has to be especially good at
systems programming, hence it has to be fairly low level.



> But an efficient or hardware-close implementation was certainly not the
> concern of the Ada.Numerics.Generic_Real_Arrays design. Otherwise it would
> not use functions at all.

What hardware?  Assume data flow hardware, and assume a way
to put the function result in the input box of the next processing unit.
What now?


>>  I'd suggest that you seriously consider prototyping
>> your algorithms using one of the fine PLs that do have the
>> required mathematical stuff built in.
> 
> Really? 90% of numeric libraries and applications is written in FORTRAN.

Only a fraction of the tabular arrays of numbers
is input to procedures of challenging numeric libraries.
I'll even guess that a larger fraction of matrices in
lots of Ada programs are 3x3, maybe 4x4.


> Do
> you think they have prototyped them in 60's?
[The libraries, or the algorithms? The latter, yes; APL is about
that old.]

I don't see how prototyping a hypertext graph algorithm
requires a maximally efficient implementation of matrix
computations.  A production system may require increased
efficiency. At that stage you have to pay attention to
the tiny bits, possibly redesigning the algorithm.
(If I could have stopped at simple recursive
functions when writing the program below it would have been
a trivial exercise. :)

Georg Bauhaus
Y A Time Drain  http://www.9toX.de



^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: Larger matrices
  2008-08-08 14:11                         ` Georg Bauhaus
@ 2008-08-08 14:36                           ` Dmitry A. Kazakov
  2008-08-08 15:40                             ` Georg Bauhaus
  0 siblings, 1 reply; 41+ messages in thread
From: Dmitry A. Kazakov @ 2008-08-08 14:36 UTC (permalink / raw)


On Fri, 08 Aug 2008 16:11:57 +0200, Georg Bauhaus wrote:

> Dmitry A. Kazakov schrieb:
>> On Fri, 08 Aug 2008 13:35:04 +0200, Georg Bauhaus wrote:
> 
>>> Ada is a systems programming language. 
> 
>> What?! Ada is a universal purpose language.
> 
> Assembly language is universal,

It is Turing complete.

> too, as are Lisp and Prolog with FFI,
> and so on.

They are not. Lisp is a list-oriented language, Prolog is a logical
inference language.

> This notion of "purpose" is not very specific.
> Let me put it this way: Ada has to be especially good at
> systems programming, hence it has to be fairly low level.

Wrong. Where does that follow from? Systems programming need not be
low-level. It has to have abstractions close to the ones typical for
"systems." There is no obvious connection between the two. For example, Ada
offers the notion of a protected action in order to represent the
systems-domain-specific concept of an interrupt. It is a fairly high-level
abstraction.

>> But an efficient or hardware-close implementation was certainly not the
>> concern of the Ada.Numerics.Generic_Real_Arrays design. Otherwise it would
>> not use functions at all.
> 
> What hardware?

Vector processors.

> Assume data flow hardware, and assume a way
> to put the function result in the input box of the next processing unit.
> What now?

Nothing, Ada.Numerics.Generic_Real_Arrays was not designed in order to
support this hardware. Which is the point. Ada is not low-level, and its
design goals are not focused solely on efficiency, but on usability,
portability and maintainability.

> I don't see how prototyping a hypertext graph algorithm
> requires a maximally efficient implementation of matrix
> computations.

Because of the problem size. Incidence matrices grow as O(n**2), i.e.
extremely fast.
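The growth claim is easy to make concrete against the sizes from the start of the thread. A minimal back-of-envelope sketch in plain Python (illustrative only; it assumes 8-byte floating-point elements, e.g. a 64-bit Long_Float):

```python
# Memory footprint of a dense n x n matrix of 8-byte floats, in MiB.
def matrix_mib(n, elem_size=8):
    return n * n * elem_size / 2**20

# The working case upthread vs. the failing one:
print(f"1000 x 1000: {matrix_mib(1000):.1f} MiB")   # ~7.6 MiB
print(f"5000 x 5000: {matrix_mib(5000):.1f} MiB")   # ~190.7 MiB
```

Roughly 190 MiB is far beyond a typical default stack limit of 1-8 MiB, which is consistent with the reported "segmentation fault" being a stack overflow rather than heap exhaustion.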

> A production system may require increased
> efficiency. At that stage you have to pay attention to
> the tiny bits, possibly redesigning the algorithm.

So, what was prototyped then? Numerics is all about algorithms. Changing
matrix representation has a huge impact on the implementation of the
operations. Technically it means that you have to re-write everything.

This BTW is exactly the problem OP has. He has "prototyped" the thing using
Ada.Numerics.Generic_Real_Arrays. Now, he faces the problem that the
prototype does not scale due to stack overflow. The consequence is a need
for a full redesign. Ergo, the prototyping was a waste of time.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: Larger matrices
  2008-08-08 14:36                           ` Dmitry A. Kazakov
@ 2008-08-08 15:40                             ` Georg Bauhaus
  2008-08-08 16:37                               ` Dmitry A. Kazakov
  0 siblings, 1 reply; 41+ messages in thread
From: Georg Bauhaus @ 2008-08-08 15:40 UTC (permalink / raw)


Dmitry A. Kazakov schrieb:

> They are not. Lisp is a list-oriented language, Prolog is a logical
> inference language.

Lisp and Prolog with FFI(!) are not universal? Come on.


>> This notion of "purpose" is not very specific.
>> Let me put it this way: Ada has to be especially good at
>> systems programming, hence it has to be fairly low level.
> 
> Wrong. Where does that follow from? Systems programming need not be
> low-level.

A systems programming language must provide for the low
level.  A language that does not provide for the low level
is not a systems programming language.  Ada has many features
that provide for low level programming.  It lacks a number of
features used in higher level programming (e.g. function
environments, unbounded numbers, ...).  Systems programming
is not the same as low level programming or high level programming;
rather, the set of objects is typically not as abstract as
some mathematical N-array of numbers.  The set of objects in
a systems programming language is more like a real N-array
of numbers: storage cells of well defined meaning, with just
one layer of abstraction above the storage cells.

>> What hardware?
> 
> Vector processors.

OK, I wasn't aware that the LA packages were made for vector
processors only.


>> Assume data flow hardware, and assume a way
>> to put the function result in the input box of the next processing unit.
>> What now?
> 
> Nothing, Ada.Numerics.Generic_Real_Arrays was not designed in order to
> support this hardware.

Aha?   I see that GNAT delegates to Fortran libraries. Traditionally,
Fortran is certainly an associate of non-PC hardware.


> Which is the point. Ada is not low-level,

Ada is fairly low-level.


>> I don't see how prototyping a hypertext graph algorithm
>> requires a maximally efficient implementation of matrix
>> computations.
> 
> Because of the problem size. Incidence matrices grow as O(n**2), i.e.
> extremely fast.

O(N**2) is not extremely fast; many non-extreme algorithms
are in this class.  The case discussed hits some memory barrier
at n = 5_000 and that's it. We get elaborate array indexing support
in some array programming languages.  If choosing one of those
PLs costs me a constant factor of 10 but I get all the indexing
stuff in return, it seems worth the cost during prototyping.



>> A production system may require increased
>> efficiency. At that stage you have to pay attention to
>> the tiny bits, possibly redesigning the algorithm.
> 
> Numerics is all about algorithms.

Numerics is about numbers (or about numeric control, depending).


> This BTW is exactly the problem OP has. He has "prototyped" the thing using
> Ada.Numerics.Generic_Real_Arrays. Now, he faces the problem that the
> prototype does not scale due to stack overflow. The consequence is a need
> for a full redesign. Ergo, the prototyping was a waste of time.

Nice rhetorical logic error.


--
Georg Bauhaus
Y A Time Drain  http://www.9toX.de



^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: Larger matrices
  2008-08-08 15:40                             ` Georg Bauhaus
@ 2008-08-08 16:37                               ` Dmitry A. Kazakov
  2008-08-08 17:37                                 ` Georg Bauhaus
  0 siblings, 1 reply; 41+ messages in thread
From: Dmitry A. Kazakov @ 2008-08-08 16:37 UTC (permalink / raw)


On Fri, 08 Aug 2008 17:40:16 +0200, Georg Bauhaus wrote:

> Dmitry A. Kazakov schrieb:
> 
>> They are not. Lisp is a list-oriented language, Prolog is a logical
>> inference language.
> 
> Lisp and Prolog with FFI(!) are not universal? Come on.

Of course they are not. Both are domain-specific languages.

>>> This notion of "purpose" is not very specific.
>>> Let me put it this way: Ada has to be especially good at
>>> systems programming, hence it has to be fairly low level.
>> 
>> Wrong. Where does that follow from? Systems programming need not be
>> low-level.
> 
> A systems programming language must provide for the low
> level.

Nope, it must provide systems programming domain abstractions. Ada does
this. This does not make it low-level.

It seems that you are confusing different concepts of abstraction. Level
depends on where is the ground, the underlying computational environment.
This is not necessarily the hardware, but it can be one.

1. Considering hardware as the environment. Ada running on an Ada-hardware
(if such existed) would make this implementation of Ada low-level. But it
would not make Ada low-level, because it also runs on the hardware
requiring much work to create an Ada compiler. Therefore Ada in general is
high level relatively to the hardware.

2. Considering programming paradigms as the environment, i.e. the terms in
which programs are thought and composed. Ada is again very high level, as it
supports up to 3rd set abstractions:

   value -> type (sets of) -> class / generic (sets of)

OO decomposition, concurrency etc.

> A language that does not provide for the low level
> is not a systems programming language.

A logical fallacy: A => B does not imply A = B.

> Ada has many features
> that provide for low level programming.  It lacks a number of
> features used in higher level programming (e.g. function
> environments, unbounded numbers, ...).

Since when did unbounded numbers become high level? Consider them used to
implement modular arithmetic or ASCII characters. How low-level!

> Systems programming
> is not the same as low level programming or high level programming;
> rather, the set of objects is typically not as abstract as
> some mathematical N-array of numbers.

I don't see how an interrupt, a task, or an I/O port is less abstract than
an array.

>>> What hardware?
>> 
>> Vector processors.
> 
> OK, I wasn't aware that the LA packages were made for vector
> processors only.

Take Intel x86 instead, if you don't like vector processor.

>>> Assume data flow hardware, and assume a way
>>> to put the function result in the input box of the next processing unit.
>>> What now?
>> 
>> Nothing, Ada.Numerics.Generic_Real_Arrays was not designed in order to
>> support this hardware.
> 
> Aha?   I see that GNAT delegates to Fortran libraries. Traditionally,
> Fortran is certainly an associate of non-PC hardware.

That was not the point. The point was, as it reads, that the design did not
target any hardware and will be *relatively* inefficient on any existing
hardware. Relatively, because it most likely will beat both Lisp and Prolog.

>>> I don't see how prototyping a hypertext graph algorithm
>>> requires a maximally efficient implementation of matrix
>>> computations.
>> 
>> Because of the problem size. Incidence matrices grow as O(n**2), i.e.
>> extremely fast.
> 
> O(N**2) is not extremely fast; many non-extreme algorithms
> are in this class.  The case discussed hits some memory barrier
> at n = 5_000 and that's it. We get elaborate array indexing support
> in some array programming languages.  If choosing one of those
> PLs costs me a constant factor of 10 but I get all the indexing
> stuff in return, it seems worth the cost during prototyping.

No chance. O(N**2) is only memory complexity. You should also consider the
number of operations required per element. Naive matrix multiplication is
O(N**3). Factor 10 means (N x 10)**3 = N**3 x 1000, a thousand times
slower! And this is only the lower bound. But all this is in order to learn
that the fancy
language X is greatly slower than Ada and is totally unsuitable for
production code? I know it in advance!
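The operation count behind this argument can be checked with a toy implementation. A minimal sketch in plain Python (illustrative only, not production numerics): the textbook triple loop performs exactly N**3 scalar multiply-adds.

```python
def matmul_naive(a, b):
    """Textbook triple-loop product of square matrices a and b.
    Returns (result, number_of_multiply_adds)."""
    n = len(a)
    ops = 0
    c = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            s = 0.0
            for k in range(n):
                s += a[i][k] * b[k][j]
                ops += 1
            c[i][j] = s
    return c, ops

ones = [[1.0] * 8 for _ in range(8)]
_, ops = matmul_naive(ones, ones)
print(ops)   # 512, i.e. 8**3
```

Doubling N multiplies the work by 8, and going from N = 1000 to N = 5000 multiplies it by 125, independent of any constant-factor difference between languages.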

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: Larger matrices
  2008-08-08 16:37                               ` Dmitry A. Kazakov
@ 2008-08-08 17:37                                 ` Georg Bauhaus
  2008-08-08 17:42                                   ` Georg Bauhaus
  2008-08-08 19:51                                   ` Dmitry A. Kazakov
  0 siblings, 2 replies; 41+ messages in thread
From: Georg Bauhaus @ 2008-08-08 17:37 UTC (permalink / raw)


Dmitry A. Kazakov schrieb:

>> A systems programming language must provide for the low
>> level.
> 
> Nope, it must provide systems programming domain abstractions. Ada does
> this. This does not make it low-level.

The systems programming domain abstractions are fairly
low level. Please, fairly.  Yes, there are some higher level
abstractions, like the whole of tasking is.  But tasking is
far from describing the entirety of Ada.   You can have the same
tasking offered with much higher level PLs.


> 1. Considering hardware as the environment. Ada running on an Ada-hardware
> (if such existed) [...]
>
> 2. Considering programming paradigms as the environment, i.e. the terms in
> which programs are thought and composed. Ada is again very high level, as it
> supports up to 3rd set abstractions:
> 
>    value -> type (sets of) -> class / generic (sets of)

Wow, "value -> type (set of)" sorts assembly languages
just one tiny bit below a type safe generic O-O language: imul / fmul.

No constructionist shuffling of artificial categorization frameworks
will convince me that Ada is not a systems programming language.
C++ is one, too.  (For another example, Erlang is reportedly good
if helped by a FFI.)  And Ada is a rich set of fairly low-level
features.  (I am not away of an 'Address mechanism in ML (without FFI).)

There is no EVAL or COMPILE in Ada.
There is no goal directed evaluation.
There is no function environment (no upward closures).
There are no generators with suspension.
There are no comprehensions, no infinite data structures.
There is no symbol manipulation, no expression macros, no inference.
There is no higher order anything.
There are no solvers.
There are no graphics primitives etc..

There are good elementary data types. And then no data structures
other than those that the programmer defines using the find
construction primitives.

Tasks and POs are built into the type system. Great for systems
programming, and also for higher level things like ... pfhhh,
whatever ...
But does a shared variable mechanism lift Ada to the level of
languages such as APL (which tends to have it, too)?



>> A language that does not provide for the low level
>> is not a systems programming language.
> 
> A logical fallacy: A => B does not imply A = B.

No wonder, you stopped after the first premise ;-)



> No chance. O(N**2) is only memory complexity. You should also consider the
> number of operations required per element. Naive matrix multiplication is
> O(N**3). Factor 10 means (N x 10)**3 = N**3 x 1000, a thousand times
> slower! And this is only the lower bound. But all this is in order to
> learn that the fancy
> language X is greatly slower than Ada and is totally unsuitable for
> production code? I know it in advance!
> 

You can write an algorithm operating on matrices
in O(Absurdly High Degree).  The original question was about
vector product where one operand is a row of a matrix.

The specific problem seems to make it desirable to be able
to say,

                 5
     3 .. 14 by  .
                 16

You get the idea.  Other algorithms may require more elaborate
subsets of matrix cells.  Use a language that allows you to say,
using *builtin* operators, things like

   "Give me all blocks of neighbours no more than two edges apart."

That's high level.
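For comparison, the quoted request can of course be rolled by hand; the argument is only over whether it should be builtin. A minimal hand-rolled sketch in plain Python (illustrative; `adj` is a hypothetical adjacency mapping, not anything from the thread):

```python
from collections import deque

def within_k_edges(adj, start, k=2):
    """All nodes reachable from start in at most k edges (BFS)."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        if dist[u] == k:
            continue                      # don't expand past k edges
        for v in adj.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return set(dist)

chain = {1: [2], 2: [3], 3: [4]}
print(within_k_edges(chain, 1))           # {1, 2, 3}: node 4 is 3 edges away
```

This is the roll-your-own subprogram that a higher-level language would replace with a single builtin expression.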



-- Georg



^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: Larger matrices
  2008-08-08 17:37                                 ` Georg Bauhaus
@ 2008-08-08 17:42                                   ` Georg Bauhaus
  2008-08-08 19:51                                   ` Dmitry A. Kazakov
  1 sibling, 0 replies; 41+ messages in thread
From: Georg Bauhaus @ 2008-08-08 17:42 UTC (permalink / raw)


Georg Bauhaus schrieb:

> features.  (I am not away of an 'Address mechanism in ML (without FFI).)
                        aware


> There are good elementary data types. And then no data structures
> other than those that the programmer defines using the find
                                                          fine
> construction primitives.



^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: Larger matrices
  2008-08-08 17:37                                 ` Georg Bauhaus
  2008-08-08 17:42                                   ` Georg Bauhaus
@ 2008-08-08 19:51                                   ` Dmitry A. Kazakov
  2008-08-09  7:44                                     ` Georg Bauhaus
  1 sibling, 1 reply; 41+ messages in thread
From: Dmitry A. Kazakov @ 2008-08-08 19:51 UTC (permalink / raw)


On Fri, 08 Aug 2008 19:37:15 +0200, Georg Bauhaus wrote:

> Dmitry A. Kazakov schrieb:
> 
>>> A systems programming language must provide for the low
>>> level.
>> 
>> Nope, it must provide systems programming domain abstractions. Ada does
>> this. This does not make it low-level.
> 
> The systems programming domain abstractions are fairly
> low level.

Why? Low relative to what?

> But tasking is
> far from describing the entirety of Ada.   You can have the same
> tasking offered with much higher level PLs.

Synchronization mechanisms of higher level than Ada? I am excited. You mean
semaphores, I bet. Please, show us that fancy language and also how tasking
there is integrated into other "higher-level" language features.

>> 1. Considering hardware as the environment. Ada running on an Ada-hardware
>> (if such existed) [...]
>>
>> 2. Considering programming paradigms as the environment, i.e. the terms in
>> which programs are thought and composed. Ada is again very high level, as it
>> supports up to 3rd set abstractions:
>> 
>>    value -> type (sets of) -> class / generic (sets of)
> 
> Wow, "value -> type (set of)" sorts assembly languages
> just one tiny bit below a type safe generic O-O language: imul / fmul.

Assembler does not have user-defined types. In the hierarchy above all
elements are user-defined. Assembler has only user-defined values. I.e. it
is only at 1st set abstraction.

> There is no EVAL or COMPILE in Ada.

Assembler has them.

> There is no goal directed evaluation.

I saw this term only in the context of low-level image processing
primitives. Sorry, it rings wrong bells.

> There is no function environment (no upward closures).

Procedural composition is itself low-level. Ada need not worry about
that low-level stuff.

> There are no generators with suspension.

Hydropneumatic suspension? Sorry.

> There are no comprehensions, no infinite data structures.

These structures are inconsistent.

> There is no symbol manipulation, no expression macros, no inference.

Symbol manipulation is text processing, no problem in Ada. Macros are
low-level source text processing. We don't need that.

Elaborated inference is inconsistent with maintainability if not low-level.
Namely, if inferred things are understandable, they are trivial. If they
are non-trivial (as in Prolog), then nobody can predict the program's
behavior.

My favorite example is AI. A healthy man and woman need no PhD degree in
order to create an intelligent system. That is a case of "inference." And
it does the job, alas with a totally unpredictable outcome. Why in your
opinion do people write tons [of mostly useless] scientific articles on the
AI issue? They want to have it imperative!

Furthermore, complexity of inference is unrelated to the abstraction level.
There exist both very complex and very low-level things.

> There is no higher order anything.

?

> There are no solvers.

Because Ada programs solve non-trivial problems. Anyway, declarative is not
a synonym for higher level. "Fly to Mars" is higher level than "I like
cheese."

> There are no graphics primitives etc..

Turtle graphics must be of an extremely high level...

You are making an error equating domain-specific to high level. It is
usually the reverse. Domain-specific languages are of very low level, because
they translate straightforwardly into the engine, which plays the role of
the hardware. This is low-level. Note that due to the limitation of the
application area and because domain-specific abstractions are quite rough
and irregular, such languages usually do not provide any higher level
abstractions. For example, OpenGL is of 1st order. You cannot define new
graphic primitives. You cannot bundle graphic primitives into generic sets
of, etc. Describe a set of tables sharing some property in SQL. Declare
class of rhombus-like inheritance graphs in UML.

In terms of abstraction level, most domain-specific languages stop where
FORTRAN-IV began. That's why there is no 5GL, 6GL, 7GL... The idea was
wrong.

> But does a shared variable mechanism lift Ada to the level of
> languages such as APL (which tends to have it, too)?

I don't know its newer versions. Initially it was very low-level, 1st set
abstraction, in the classification above.

>> No chance. O(N**2) is only memory complexity. You should also consider the
>> number of operations required per element. Naive matrix multiplication is
>> O(N**3). Factor 10 means (N x 10)**3 = N**3 x 1000, a thousand times
>> slower! And this is only the lower bound. But all this is in order to
>> learn that the fancy
>> language X is greatly slower than Ada and is totally unsuitable for
>> production code? I know it in advance!
>> 
> 
> You can write an algorithm operating on matrices
> in O(Absurdly High Degree).  The original question was about
> vector product where one operand is a row of a matrix.

It was matrix multiplication, I guess.
 
> Use a language that allows you to say,
> using *builtin* operators, things like
> 
>    "Give me all blocks of neighbours no more than two edges apart."
> 
> That's high level.

   procedure Give_Me_All_Blocks...;

This is not high level, and I know the price. Domain-specific languages are
usable in domain-specific cases ... and unusable universally.

In my area of professional interest we are forced to fight with various
domain-specific languages on a daily basis. I have reasons to dislike
them.

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: Larger matrices
  2008-08-08 19:51                                   ` Dmitry A. Kazakov
@ 2008-08-09  7:44                                     ` Georg Bauhaus
  2008-08-09 10:33                                       ` Dmitry A. Kazakov
  0 siblings, 1 reply; 41+ messages in thread
From: Georg Bauhaus @ 2008-08-09  7:44 UTC (permalink / raw)


Dmitry A. Kazakov wrote:

> Synchronization mechanisms of higher level than Ada? I am excited. You mean
> semaphores, I bet.

I don't mean semaphores. You bet.
And I said, have tasking offered with much higher level PLs.
That said, is there any direct reflection of the theory of channels
in Ada?
Is pragma No_Return a high level solution for "possibly dying
remote computation"?
Look at how they currently work around distributed tasking as criticized
by Liskov many years ago: It seems to have become a more pressing
issue-- or can be bought and sold, at least.



>> There is no EVAL or COMPILE in Ada.
> 
> Assembler has them.

Assembly language does not, of course, *have* EVAL or COMPILE
instructions.  Sure, you can GOTO EVAL, and have self modifying
code at random.  That's always interesting, but I'm sure it doesn't
convince people that assembly language is therefore high level.


>> There is no goal directed evaluation.
> 
> I saw this term only in the context of low-level image processing
> primitives. Sorry, it rings wrong bells.

The goal finding operators, e.g. assignment, of a language execute
until the expression delivers a value suitable in context (the goal
is met).
The expression involves the same type of operators, recursively.
You can make your own. Example language is Icon.
So it has backtracking built in at the expression level, i.e. not at the
roll your own level.


>> There is no function environment (no upward closures).
> 
> Procedural composition is itself low-level. Ada need not worry about
> that low-level stuff.

O.k., function composition is not the same as some kind of type
composition that includes the primitive operations.
Sure.   That is a different tree; it only grows in Qi land
or in some recent extension of Haskell.

(I'd say that functionalists tend to fall victim to what
they call "fusion". Many also happily ignore controlling digital
computers.  But they call the laws of functional composition high
level, as everyone else does.)


>> There are no generators with suspension.
> 
> Hydropneumatic suspension? Sorry.

suspend/resume on expressions. Helps with non-deterministic or
arbitrary-length data structures, execution on demand (lazy), etc.
You can program this in Ada by rolling your own.


>> There is no symbol manipulation, no expression macros, no inference.
> 
> Symbol manipulation is text processing, no problem in Ada. Macros are
> low-level source text processing. We don't need that.

The point is, you can do things with Lisp macros that you cannot
do otherwise.  Are you saying that because you can do without the
power of a Lisp macro system in your programs, these macros must
therefore be part of low level languages for everyone else?


> Elaborated inference is inconsistent with maintainability if not low-level.
> Namely, if inferred things are understandable, they are trivial. If they
> are non-trivial (as in Prolog), then nobody can predict the program's
> behavior.

Well, you do not *want* to predict the program's behavior.
You want to see possible solutions, that's all.


>> There are no solvers.
> 
> Because Ada programs solve non-trivial problems. Anyway, declarative is not
> a synonym for higher level.

"Declarative" may be hyped, but by all definitions I know,
"declarative" certainly fits "higher level".  Anyway, you can
combine solvers to solve non-trivial problems using combination
operators, where each solver contributes to the solution.
No Ada style roll-your-own-solvers involved.

Another example: Throw a problem decomposition into the tuple space and
wait for it to solve the problem in some way.  Maybe stepwise.


>> There are no graphics primitives etc..
> 
> Turtle graphics must be of an extremely high level...

To me graphics primitives seem higher level than a load of
Bitmap_View.Set_Pixel(Our_Turtle.Next_Y,
                      Our_Turtle.Next_X, Color => Red);

I want to be able to say, "draw a bar chart using this set of data",
*without* programming.
Using R, to pick an approximation to high level graphics operators,
it is exactly what I do, *without* further graphics systems programming.
That's high level. (And, BTW, applicable in many domains.)


> You are making an error equating domain-specific to high level.

Like the rest of humanity continues to err by using more or less
established language. :-)

Almost all programming tasks *are* domain-specific.
High level operators from high level languages just
ease the construction of many domain-specific programs. That is,
of almost all programs.  At the cost of less control of the hardware.
No systems programming style involved when computing "Vec . Vec"
in a high level language that supports arrays at a higher level
than does Ada.
Systems programmers revert to whatever level they can get, ADTs
for a start, and "patterns" for much of the remaining solutions.

I'm *not* saying that there is anything inherently superior in
using higher level operators and such!
Array operators are just operating at a higher level than single
array cell manipulation will do.  So, typically, Ada programmers
will rely on a call to some roll-your-own "*" for arrays.

If I had wanted to say, "high level type system", where high level
refers to properties such as thread of control, the order of
possible calls established by the acceptor protocol, and so on,
I would have said so.


> It is
> usually the reverse. Domain-specific languages are of very low level, because
> they translate straightforwardly into the engine, which plays the role of
> the hardware. This is low-level.

This is just two solution layers close to each other, and both at a high
level. By a not uncommon definition, at least.  If I say,
 Result <- M1 {times} M2
and the parallel vector APL engine translates this into a smart
distributed algorithm,  and the result arrives within time bounds,
and the operator {times} with all the magic is *builtin*, and
is portable, and adapts to the execution environment, I call that
high level.

>  Describe a set of tables sharing some property in SQL.

A high level PL expression would be

 "make X persist",

provided this expression alone achieves persistence *without*
further programming.
No database systems programming needed such as invoking SOCI,
ODBC, writing a user defined ADT for Embedded Snorkle or whatever.



> In terms of abstraction level, most domain-specific languages stop where
> FORTRAN-IV began. That's why there is no 5GL, 6GL, 7GL... The idea was
> wrong.

So their idea was wrong (where the 5GL is usually thought of as logic
and constraints), yours are right?  Might be, but could you please choose
a good set of fresh words and avoid speaking of "high level languages"
when you are referring to something else?


>> Use a language that allows you to say,
>> using *builtin* operators, things like
>>
>>    "Give me all blocks of neighbours no more than two edges apart."
>>
>> That's high level.
> 
>    procedure Give_Me_All_Blocks...;
> 
> This is not high level, and I know the price.

Exactly, the Ada example is lower level, and requires
an implementation subprogram.
Ada doesn't offer the high level given in the phrase above.


> Domain-specific languages are
> usable in domain-specific cases ... and unusable universally.

Most high-level languages are demonstrably usable universally.
They need not be restricted to specific cases in some domain.
And certainly, SQL is not representative of the higher level languages.





^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: Larger matrices
  2008-08-09  7:44                                     ` Georg Bauhaus
@ 2008-08-09 10:33                                       ` Dmitry A. Kazakov
  2008-08-11 11:51                                         ` amado.alves
  0 siblings, 1 reply; 41+ messages in thread
From: Dmitry A. Kazakov @ 2008-08-09 10:33 UTC (permalink / raw)


On Sat, 09 Aug 2008 09:44:55 +0200, Georg Bauhaus wrote:

> Dmitry A. Kazakov wrote:
> 
> That said, is there any direct reflection of the theory of channels
> in Ada?

Channel is high level? Come on.

> Is pragma No_Return a high level solution for "possibly dying
> remote computation"?

RPC is low-level, but it is a different issue. Concurrent /= remote
computing.

> Look at how they currently work around distributed tasking as criticized
> by Liskov many years ago: It seems to have become a more pressing
> issue-- or can be bought and sold, at least.

Where are those higher level abstractions of tasking?

>>> There is no EVAL or COMPILE in Ada.
>> 
>> Assembler has them.
> 
> Assembly language does not, of course, *have* EVAL or COMPILE
> instructions.

   MOV @-(PC), @-(PC)

if I remember the PDP-11 correctly. It writes itself into memory just before
the current instruction and then executes that. How extremely high level!

Look, it is not the programming paradigm (reflection, in this case) that
makes a language low or high level, but the means of composition.

>>> There is no goal directed evaluation.
>> 
>> I saw this term only in the context of low-level image processing
>> primitives. Sorry, it rings wrong bells.
> 
> The goal finding operators, e.g. assignment, of a language execute
> until the expression delivers a value suitable in context (the goal
> is met).

This is not high level, it is iteration. See above.

It could be high level if the language provided a framework for composing
conditions, goals, and iterative tasks. Declarative languages are notably
weak in that respect.

> (I'd say that functionalists tend to fall victim to what
> they call "fusion". Many also happily ignore controlling digital
> computers.  But they call the laws of functional composition high
> level, as everyone else does.)

"High" sounds better than "low"...

>>> There are no generators with suspension.
>> 
>> Hydropneumatic suspension? Sorry.
> 
> suspend/resume on expressions. Helps with non-deterministic or
> arbitrary length data structures, execute on demand (lazy), etc.

So again, lazy becomes a substitute for high. Why? What if I proclaim that
eager evaluation is high? What if the language, like Ada, does not specify
eagerness in many cases? Which, BTW, is high level, because it abstracts
away an aspect of evaluation irrelevant to the semantics.

>>> There is no symbol manipulation, no expression macros, no inference.
>> 
>> Symbol manipulation is text processing, no problem in Ada. Macros is
>> low-level source text processing. We don't need that.
> 
> The point is, you can do things with Lisp macros that you cannot
> do otherwise.

An industrial Ada compiler written in Lisp? Come on.

> Are you saying that because you can do without the
> power of a Lisp macro system in your programs, these macros must
> therefore be part of low level languages for everyone else?

No, I am saying that macros should be a part of no language.

>> Elaborated inference is inconsistent with maintainability if not low-level.
>> Namely, if inferred things are understandable, they are trivial. If they
>> are non-trivial (as in Prolog), then nobody can predict the program's
>> behavior.
> 
> Well, you do not *want* to predict the program's behavior.
> You want to see possible solutions, that's all.

If I cannot predict it, how do I know that these are solutions? But the
aircraft hitting the ground is certainly a possible solution of flight
equations. Do you want to see it, before you jump off?

>>> There are no solvers.
>> 
>> Because Ada programs solve non-trivial problems. Anyway, declarative is not
>> a synonym of higher level.
> 
> "Declarative" may be hyped, but by all definitions I know,
> "declarative" certainly fits "higher level".

Higher than what?

> Anyway, you can
> combine solvers to solve non-trivial problems using combination
> operators, where each solver contributes to the solution.

That is what I want to see, the means of composing declarative
descriptions. Consider these:

1: x**a + y**b = z**c
2: a = b = c
3: a in natural

Go on!

> No Ada style roll-your-own-solvers involved.

Even if provided as a library package?

> Another example: Throw a problem decomposition into the tuple space and
> wait for it to solve the problem in some way.  Maybe stepwise.

Yep, a problem like HALT(p).

Again, high level is not about the semantics of the primitive operations.
As the name suggests, a primitive operation is primitive, even if it took
millions of man-years to implement, like the MOV instruction of a modern
processor.

>>> There are no graphics primitives etc..
>> 
>> Turtle graphics must be of an extremely high level...
> 
> To me graphics primitives seem higher level than a load of
> Bitmap_View.Set_Pixel(Our_Turtle.Next_Y,
>                       Our_Turtle.Next_X, Color => Red);

But Set_Pixel *is* a graphics primitive. Why is it low-level then? I thought
your point was that everything about graphics was high level. If not, then
we must talk about what makes a graphics framework high or low level.

> I want to be able to say, "draw a bar chart using this set of data",
> *without* programming.

Which set of data? How are the data composed? How does the bar interact with
other graphical objects?

"Draw a bar" is a *low*-level rendering command. Worse than that, it is
imperative. How awful...

>> You are making an error equating domain-specific to high level.
> 
> Like the rest of humanity continues to err by using more or less
> established language. :-)

Ah, there are so many errors "the rest of humanity" is accustomed to making,
that one more would not make any difference. After all, that rest uses C++...
(:-))

> High level operators from high level language just
> ease the construction of many domain-specific programs. That is,
> of almost all programs.

Nope. If it were true, there would be no need to program anything.

>> It is
>> usually reverse. Domain-specific languages are of very low level, because
>> they translate straightforwardly into the engine, which plays the role of
>> the hardware. This is low-level.
> 
> This is just two solution layers close to each other, and both at a high
> level. By a not uncommon definition, at least.  If I say,
>  Result <- M1 {times} M2
> and the parallel vector APL engine translates this into a smart
> distributed algorithm,  and the result arrives within time bounds,
> and the operator {times} with all the magic is *builtin*, and
> is portable, and adapts to the execution environment, I call that
> high level.

And it is not, because you are already at that level. It is not higher, it
is the same. Higher level describes the ability to compose abstractions
higher than the granted ones. A solution always requires layered
abstractions over the domain:

Solution -----------------------------------
  |  S1           | D1                | U1
  |               |                   |
Domain            |                   |
abstraction <--> Domain-specific <--->|
                 language             | U2
                  |                   | 
                  | D2             Universal language
                  |                   | U3
                  |                   |
                  Hardware abstraction, when same

Yes, a universal language has the abstraction difference U2, usually covered
by a set of reusable libraries. But the reason why domain-specific languages
never made it is that D1 >> U1 for any more or less realistic S1.
Abstractions aren't additive. This is why such languages are lower-level
(D1 >> U1). They require a much bigger amount of work to bridge S1.

Furthermore, large systems deal with many domains, so the language
impedance eats up whatever small advantage is left over. In fact, more than
half of human and computational resources are spent on fighting the
shortcomings of domain-specific languages and libraries.

>>  Describe a set of tables sharing some property in SQL.
> 
> A high level PL expression would be
> 
>  "make X persist",

This is not SQL. So you agree with me that SQL is low-level. 

>> In terms of abstraction level, most of domain-specific languages stop where
>> FORTRAN-IV began. That's why there is no 5GL, 6GL, 7GL... The idea was
>> wrong.
> 
> So their idea was wrong (where the 5GL is usually thought of as logic
> and constraints), yours are right?  Might be, but could you please choose
> a good set of fresh words and avoid speaking of "high level languages"
> when you are referring to something else?

But n in nGL is the number of a language's generation. Where does it follow
from that n1 > n2 => L1 is higher level than L2? It was probably hoped to
happen, magically, but it did not.

>>> Use a language that allows you to say,
>>> using *builtin* operators, things like
>>>
>>>    "Give me all blocks of neighbours no more than two edges apart."
>>>
>>> That's high level.
>> 
>>    procedure Give_Me_All_Blocks...;
>> 
>> This is not high level, and I know the price.
> 
> Exactly, the Ada example is lower level, and requires
> an implementation subprogram.

Why do you care, if it was not you who wrote it? Again, a higher level
language provides an open-ended way to build up reusable components. This is
what Ada and other popular universal languages like C++ do extremely well.
Domain-specific languages fail here miserably.

> Ada doesn't offer the high level given in the phrase above.

But it could. [In the theory of formal languages, a language is equivalent
to the set of all its possible programs.]

The problem with the domain-specific languages is their incompleteness in
practice, even when a particular language is technically Turing-complete.

>> Domain-specific languages are
>> usable in domain-specific cases ... and unusable universally.
> 
> Most high-level languages are demonstrably usable universally.
> They need not be restricted to specific cases in some domain.
> And certainly, SQL is not representative of the higher level languages.

Oh, write an Ada compiler in Prolog. It must be that simple, no?

-- 
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de



^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: Larger matrices
  2008-08-09 10:33                                       ` Dmitry A. Kazakov
@ 2008-08-11 11:51                                         ` amado.alves
  2008-08-11 13:51                                           ` Peter C. Chapin
  2008-08-13 14:03                                           ` John B. Matthews
  0 siblings, 2 replies; 41+ messages in thread
From: amado.alves @ 2008-08-11 11:51 UTC (permalink / raw)


Thanks to all.
I have succeeded by increasing the stack size of a subtask to 1G!

   task type Subtask_T;
   for Subtask_T'Storage_Size use 1_000_000_000;
   task body Subtask_T is
   begin
      Main;
   end Subtask_T;

This compiles and runs on GNAT on Windows sans any extra pragma or
option.
It processes the 5000 x 5000 matrix (multiplication by a vector).

I tried 200M first for the Storage_Size. Not enough. Don't know why.
5000 x 5000 << 200M. BTW, the semantic maximum is Integer'Last.

On Mac OS X I simply cannot run multitasks.



^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: Larger matrices
  2008-08-11 11:51                                         ` amado.alves
@ 2008-08-11 13:51                                           ` Peter C. Chapin
  2008-08-11 15:37                                             ` amado.alves
  2008-08-13 14:03                                           ` John B. Matthews
  1 sibling, 1 reply; 41+ messages in thread
From: Peter C. Chapin @ 2008-08-11 13:51 UTC (permalink / raw)


amado.alves@gmail.com wrote:

> I tried 200M first for the Storage_Size. Not enough. Don't know why.
> 5000 x 5000 << 200M. BTW, the semantic maximum is Integer'Last.

5000 x 5000 = 25 million. But then how large is each matrix element? If 
each element is 8 bytes your matrix is exactly 200 MB in size.
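A quick back-of-the-envelope check (the 4- and 8-byte element widths are
assumptions; the actual figure depends on how the compiler represents the
type):

```python
# Rough sizes of a 5000 x 5000 matrix for two plausible element widths.
elements = 5000 * 5000        # 25 million elements
bytes_if_4 = elements * 4     # a 32-bit real per element
bytes_if_8 = elements * 8     # a 64-bit real per element

print(bytes_if_4)   # 100000000 bytes: would fit in a 200M budget
print(bytes_if_8)   # 200000000 bytes: fills a 200M budget by itself
```

So with 8-byte elements the matrix alone consumes the whole 200M, leaving
nothing for the vector, locals, or call overhead.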

Peter



^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: Larger matrices
  2008-08-11 13:51                                           ` Peter C. Chapin
@ 2008-08-11 15:37                                             ` amado.alves
  0 siblings, 0 replies; 41+ messages in thread
From: amado.alves @ 2008-08-11 15:37 UTC (permalink / raw)


> 5000 x 5000 = 25 million. But then how large is each matrix element? If
> each element is 8 bytes your matrix is exactly 200 MB in size.

My real type has 10 digits. Don't know how many bytes. I guessed 4,
but maybe it's 8. Curiously enough a storage size explicitly set by
the expression 5000 * 5000 * Real_Type'Size, theoretically only enough
for the matrix, works! (I would expect more space to be required e.g.
for the vector.)
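A guess as to why it works: in Ada, 'Size is measured in bits, while
Storage_Size counts storage elements (bytes on typical machines). Assuming
Real_Type is a 64-bit float (10 digits does not fit in single precision),
the expression reserves about eight times what the matrix itself needs:

```python
# Sketch of the arithmetic, assuming Real_Type'Size = 64 bits.
bits_per_element = 64
elements = 5000 * 5000

storage_size = elements * bits_per_element       # value of 5000 * 5000 * Real_Type'Size
matrix_bytes = elements * bits_per_element // 8  # bytes the matrix itself occupies

print(storage_size)   # 1600000000 storage elements reserved
print(matrix_bytes)   # 200000000 bytes actually needed for the matrix
```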

But in sum I am very happy now with my GNAT GPL 2008 on Windows Vista
Basic configuration :-)



^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: Larger matrices
  2008-08-11 11:51                                         ` amado.alves
  2008-08-11 13:51                                           ` Peter C. Chapin
@ 2008-08-13 14:03                                           ` John B. Matthews
  1 sibling, 0 replies; 41+ messages in thread
From: John B. Matthews @ 2008-08-13 14:03 UTC (permalink / raw)


In article 
<06d4e936-3fd2-49cc-983b-8581911e497e@e53g2000hsa.googlegroups.com>,
 amado.alves@gmail.com wrote:

[...]
> On Mac OS X I simply cannot run multitasks.

That's disappointing. I am able to do so on Mac OS X 10.4.11 (ppc) with 
gnat 4.3.0, and I'm told others have success on 10.5.x (i386). You might 
bring this up on the GNAT-OSX mailing list.

-- 
John B. Matthews
trashgod at gmail dot com
home dot woh dot rr dot com slash jbmatthews



^ permalink raw reply	[flat|nested] 41+ messages in thread

end of thread, other threads:[~2008-08-13 14:03 UTC | newest]

Thread overview: 41+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2008-08-06 13:32 Larger matrices amado.alves
2008-08-06 14:29 ` Georg Bauhaus
2008-08-06 15:01   ` amado.alves
2008-08-06 17:29     ` amado.alves
2008-08-06 17:58       ` Dmitry A. Kazakov
2008-08-06 18:40         ` amado.alves
2008-08-07  7:44           ` Dmitry A. Kazakov
2008-08-06 18:44       ` Jeffrey R. Carter
2008-08-06 19:12         ` amado.alves
2008-08-06 23:33           ` amado.alves
2008-08-07  3:02             ` Randy Brukardt
2008-08-07  6:30             ` Georg Bauhaus
2008-08-07  8:01               ` amado.alves
2008-08-07  8:55                 ` Egil Høvik
2008-08-07 19:13                 ` Jeffrey R. Carter
2008-08-08  9:59                   ` amado.alves
2008-08-08 10:38                     ` Dmitry A. Kazakov
2008-08-08 11:29                     ` Jean-Pierre Rosen
2008-08-08 13:15                       ` Jeffrey Creem
2008-08-08 13:32                         ` Dmitry A. Kazakov
2008-08-08 11:35                     ` Georg Bauhaus
2008-08-08 12:11                       ` Dmitry A. Kazakov
2008-08-08 14:11                         ` Georg Bauhaus
2008-08-08 14:36                           ` Dmitry A. Kazakov
2008-08-08 15:40                             ` Georg Bauhaus
2008-08-08 16:37                               ` Dmitry A. Kazakov
2008-08-08 17:37                                 ` Georg Bauhaus
2008-08-08 17:42                                   ` Georg Bauhaus
2008-08-08 19:51                                   ` Dmitry A. Kazakov
2008-08-09  7:44                                     ` Georg Bauhaus
2008-08-09 10:33                                       ` Dmitry A. Kazakov
2008-08-11 11:51                                         ` amado.alves
2008-08-11 13:51                                           ` Peter C. Chapin
2008-08-11 15:37                                             ` amado.alves
2008-08-13 14:03                                           ` John B. Matthews
2008-08-07 11:28             ` johnscpg
2008-08-07 12:35 ` Alex R. Mosteo
2008-08-07 13:40   ` amado.alves
2008-08-07 15:12     ` Alex R. Mosteo
2008-08-07 16:25       ` amado.alves
2008-08-07 18:21         ` amado.alves

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox