comp.lang.ada
* Re: Problems with large records (GNAT) [continued]
  2001-02-28 10:44 Problems with large records (GNAT) [continued] Dr Adrian Wrigley
@ 2001-02-28  3:13 ` Robert A Duff
  2001-02-28 12:09   ` Dr Adrian Wrigley
  2001-02-28 18:35 ` Laurent Guerby
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 15+ messages in thread
From: Robert A Duff @ 2001-02-28  3:13 UTC


Dr Adrian Wrigley <amtw@linuxchip.demon.co.uk> writes:

> You might remember a while back I had a problem with 'Size failing
> on large types, using GNAT (3.12p Intel Linux (Red Hat 6.2)).

No.  But then I haven't read comp.lang.ada in months.  ;-)

> I never managed to solve that problem, and made do with keeping
> all my records under 256MB (!).

I'm curious: Why do you need such enormous records?
And if your data structures are almost as big as your 32-bit address
space, why don't you get a machine with a 64-bit address space?

> This time, I have come across another manifestation of the problem,
> which appears rather strange.  The locations for different
> elements in a record are the same.  In the example code below,
> the elements First and Last are stored in the same place.
> (you can verify this using 'Access on the two elements).

They're not aliased in the code below, so 'Access isn't allowed.
Perhaps you meant 'Address?

Anyway, I can't explain that problem, but...

>    X := Malloc (Interfaces.C.Size_T (Size * Float'Size + 2*Float'Size)); -- Hmmmm

Float'Size is in bits, whereas malloc expects a size in bytes
(or in units of sizeof(char), or whatever).

You need to divide by System.Storage_Unit, and you need to write your
code carefully to avoid overflow, since you're dealing with numbers
close to Integer'Last.
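
For instance, here is a sketch of that computation (Alloc_Demo and the
other names are made up for illustration, and GNAT's 64-bit
Long_Long_Integer is assumed):

    with System;
    with Interfaces.C;

    procedure Alloc_Demo is

       Size : constant := 256*1024*1024 - 1;

       --  'Size is in bits; Storage_Unit converts bits to bytes.
       Bytes_Per_Float : constant Long_Long_Integer :=
          Long_Long_Integer (Float'Size / System.Storage_Unit);

       --  Do the arithmetic in 64 bits so it cannot wrap near Integer'Last.
       Byte_Count : constant Long_Long_Integer :=
          (Long_Long_Integer (Size) + 2) * Bytes_Per_Float;

       function Malloc (Size : Interfaces.C.Size_t) return System.Address;
       pragma Import (C, Malloc, "malloc");

       X : System.Address;

    begin

       X := Malloc (Interfaces.C.Size_t (Byte_Count));

    end Alloc_Demo;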

Why are you using malloc anyway?  Why not "X := new Item_t;"?

- Bob

P.S. If you think you've found a bug in GNAT, report it, and maybe
they'll fix it.




* Re: Problems with large records (GNAT) [continued]
  2001-02-28 12:09   ` Dr Adrian Wrigley
@ 2001-02-28  9:51     ` Florian Weimer
  0 siblings, 0 replies; 15+ messages in thread
From: Florian Weimer @ 2001-02-28  9:51 UTC


Dr Adrian Wrigley <amtw@linuxchip.demon.co.uk> writes:

> I would be using a 64-bit machine, but they're a bit more expensive,
> and GNAT isn't released for Alpha/Linux :(

But it is for Compaq Tru64 and Sun Solaris.

> > P.S. If you think you've found a bug in GNAT, report it, and maybe
> > they'll fix it.
> 
> Probably a good idea.  But I'm impatient!

Then fix it yourself, or pay someone to fix it.

(Your workaround is incorrect, and you should always compile your
programs with '-gnato'.)




* Problems with large records (GNAT) [continued]
@ 2001-02-28 10:44 Dr Adrian Wrigley
  2001-02-28  3:13 ` Robert A Duff
                   ` (3 more replies)
  0 siblings, 4 replies; 15+ messages in thread
From: Dr Adrian Wrigley @ 2001-02-28 10:44 UTC


Hi all!

You might remember a while back I had a problem with 'Size failing
on large types, using GNAT (3.12p Intel Linux (Red Hat 6.2)).
I never managed to solve that problem, and made do with keeping
all my records under 256MB (!).

This time, I have come across another manifestation of the problem,
which appears rather strange.  The locations for different
elements in a record are the same.  In the example code below,
the elements First and Last are stored in the same place.
(you can verify this using 'Access on the two elements).

I wish to use large records mapped into memory using the "mmap" call,
and obtain fast, random access to all the elements.  Subsequent runs
of the program can use the same data.  This is an order of magnitude
faster (sometimes two) than non-mmap solutions I tried.
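
For reference, the sort of binding involved looks roughly like this
(a sketch only: the Prot/Flags values are the usual Linux ones, off_t
is taken as C long, and error handling is omitted):
-----------
with System;
with Interfaces.C;

package Mmap_Binding is

   --  Thin binding to mmap(2).
   function Mmap
     (Addr   : System.Address;
      Length : Interfaces.C.Size_t;
      Prot   : Interfaces.C.Int;
      Flags  : Interfaces.C.Int;
      Fd     : Interfaces.C.Int;
      Offset : Interfaces.C.Long) return System.Address;
   pragma Import (C, Mmap, "mmap");

   PROT_READ  : constant := 1;   -- usual Linux values
   PROT_WRITE : constant := 2;
   MAP_SHARED : constant := 1;

end Mmap_Binding;
-----------
The address it returns is then turned into an access value with
Ada.Unchecked_Conversion.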

I'm wary of rewriting the code significantly, because the same problem
may occur in the new code.  Maybe some linked structure could do it,
but then it would always need to be loaded in the same place.

Any ideas/comments?  Does this example work with other compilers on
32-bit architectures?

How can I work out how much storage to allocate, when 'Size and
'Max_Size_In_Storage_Elements can't handle it?

In this example, the code generates the wrong answer silently.
I don't like that...
-----------
with Text_IO;
with Interfaces.C;

procedure Dum is

   Size : Integer := 256*1024*1024 - 1;

   type Big_T is array (1 .. Size) of Float;

   type Item_T is record
      First : Float;
      Item  : Big_T;
      Last  : Float;
   end record;

   type Item_A is access all Item_T;

   function Malloc (Size : Interfaces.C.Size_t) return Item_A;
   pragma Import (C, Malloc, "malloc");

   X : Item_A;

begin

   X := Malloc (Interfaces.C.Size_T (Size * Float'Size + 2*Float'Size)); -- Hmmmm

   X.First := 3.14159;
   X.Last  := 2.71828;

   Text_IO.Put_Line ("First is " & Float'Image (X.First) &
                     " last is " & Float'Image (X.Last));

end Dum;
----------
$ gnatmake dum
$ dum
First is  2.71828E+00 last is  2.71828E+00
$
----------
Dr Adrian Wrigley




* Re: Problems with large records (GNAT) [continued]
  2001-02-28  3:13 ` Robert A Duff
@ 2001-02-28 12:09   ` Dr Adrian Wrigley
  2001-02-28  9:51     ` Florian Weimer
  0 siblings, 1 reply; 15+ messages in thread
From: Dr Adrian Wrigley @ 2001-02-28 12:09 UTC


Robert A Duff wrote:
> I'm curious: Why do you need such enormous records?
> And if your data structures are almost as big as your 32-bit address
> space, why don't you get a machine with a 64-bit address space?

I tried to explain that below... I want to map data from a filing system
directly into the program's memory space.

I would be using a 64-bit machine, but they're a bit more expensive,
and GNAT isn't released for Alpha/Linux :(
Since a PC can be loaded with 2048MB RAM nowadays, it is frustrating
to struggle with code hitting the 256M limit.

It is very easy to write code which breaks unexpectedly in "normal" use,
due to this capacity limit.  For example, in image processing, you might
read in a 6000x4000 pixel color image, and compute the spectrum (e.g. FFT)
as floats.  An apparently reasonable implementation in Ada/GNAT/Intel breaks
(whereas the equivalent program in C works fine).  Images of this
resolution are standard in the 70mm film world.

> > This time, I have come across another manifestation of the problem,
> > which appears rather strange.  The locations for different
> > elements in a record are the same.  In the example code below,
> > the elements First and Last are stored in the same place.
> > (you can verify this using 'Access on the two elements).
> 
> They're not aliased in the code below, so 'Access isn't allowed.
> Perhaps you meant 'Address?

I took the "aliased" out, since it was redundant in the actual code.

> Anyway, I can't explain that problem, but...
> 
> >    X := Malloc (Interfaces.C.Size_T (Size * Float'Size + 2*Float'Size)); -- Hmmmm
> 
> Float'Size is in bits, whereas malloc expects a size in bytes
> (or in units of sizeof(char), or whatever).

Damn!  I had originally used "4" in place of Float'Size.  The value passed
to Malloc should be 16#40000004#, or thereabouts.  As written in my
post, it only gets 32, because the arithmetic overflows and wraps in 32
bits.  I didn't notice this because the code doesn't access the high
end of the memory.

> You need to divide by System.Storage_Unit, and you need to write your
> code carefully to avoid overflow, since you're dealing with numbers
> close to Integer'Last.

This, of course, is the crux of the problem.  We're dealing with
numbers exceeding Integer'Last, and wrapping makes it unworkable.

> Why are you using malloc anyway?  Why not "X := new Item_t;"?

I find that "new" gives me problems with large types.
I think maybe GNAT sometimes optimises "new" to use the stack, and
then runs out of space.  Malloc is happy to give all the memory the
system has (and on Linux, considerably more :().

> P.S. If you think you've found a bug in GNAT, report it, and maybe
> they'll fix it.

Probably a good idea.  But I'm impatient!
--
Dr Adrian Wrigley




* Re: Problems with large records (GNAT) [continued]
  2001-02-28 10:44 Problems with large records (GNAT) [continued] Dr Adrian Wrigley
  2001-02-28  3:13 ` Robert A Duff
@ 2001-02-28 18:35 ` Laurent Guerby
  2001-03-01  8:17   ` Dr Adrian Wrigley
  2001-03-02 20:32 ` Randy Brukardt
  2001-03-07  2:15 ` Dr Adrian Wrigley
  3 siblings, 1 reply; 15+ messages in thread
From: Laurent Guerby @ 2001-02-28 18:35 UTC


Dr Adrian Wrigley <amtw@linuxchip.demon.co.uk> writes:
>    type Big_T is array (1 .. Size) of Float;
> 
>    type Item_T is record
>       First : Float;
>       Item  : Big_T;
>       Last  : Float;
>    end record;

Why do you want to use a record here?  It looks 100% like an array.

Also, if you use a clean abstraction for your data structure (a private
type), you can do all sorts of hacks behind the scenes.
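
Something like this, as a sketch (all names illustrative):

package Price_Store is

   type Store_T is limited private;

   procedure Open (S : in out Store_T; File_Name : String);
   function  Get  (S : Store_T; I : Positive) return Float;

private

   --  Constrained "big" array type: never allocated as a whole, only
   --  reached through a pointer obtained elsewhere (e.g. from mmap).
   type Float_Array_T is array (Positive) of aliased Float;
   type Float_Array_A is access all Float_Array_T;

   type Store_T is limited record
      Data  : Float_Array_A;
      Count : Natural := 0;   -- logical bound, checked manually
   end record;

end Price_Store;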

If you could describe more precisely the data structure you're trying
to mmap, perhaps comp.lang.ada readers could help a bit more.

-- 
Laurent Guerby <guerby@acm.org>




* Re: Problems with large records (GNAT) [continued]
  2001-03-01  8:17   ` Dr Adrian Wrigley
@ 2001-03-01  1:58     ` Robert A Duff
  2001-03-01 22:18       ` Dr Adrian Wrigley
  2001-03-01  7:00     ` tmoran
  2001-03-01 19:38     ` Laurent Guerby
  2 siblings, 1 reply; 15+ messages in thread
From: Robert A Duff @ 2001-03-01  1:58 UTC


Dr Adrian Wrigley <amtw@linuxchip.demon.co.uk> writes:

> If I increase the number of stocks to 5000, things break and significant
> changes are necessary because the 256M limit is exceeded.

I didn't see the program that shows this 256M limit.  Please post it
(again?), along with its output.  Make sure your program is compiled
with all run-time checks turned on, which is not the default in GNAT
(unfortunately).

- Bob




* Re: Problems with large records (GNAT) [continued]
  2001-03-01  8:17   ` Dr Adrian Wrigley
  2001-03-01  1:58     ` Robert A Duff
@ 2001-03-01  7:00     ` tmoran
  2001-03-01 21:52       ` Dr Adrian Wrigley
  2001-03-01 19:38     ` Laurent Guerby
  2 siblings, 1 reply; 15+ messages in thread
From: tmoran @ 2001-03-01  7:00 UTC


>  type PriceSummary_T is record
>     OpenPrice   : Float;
>     HighPrice   : Float;
>     LowPrice    : Float;
>     ClosePrice  : Float; -- Split and dividend corrected price
>     UncorrectedClose : Float; -- Raw share price
>     Volume      : Integer;
>     Time        : Time_T;
>  end record;
  I realize you are interested in the generic large-memory-with-Gnat
problem, but, as they say, sometimes it's better to improve the
algorithm than the hardware.  I worked on a commercial system
(Technical Tools Co.) with stock (and commodity) historical data of
this type, using DPMI DOS and Windows 3.1 (and Ada 9X).  For
historical data, stock prices can be 16 bit fixed point with delta of
1/8, rather than 32 bit floats (excluding Berkshire-Hathaway).  Even
with prices in pennies nowadays, 24 bits should be quite enough for a
stock price.  Similarly, a 24 bit Volume (16 million shares of one
stock traded in one day) should normally be adequate, perhaps with
an exception list for anything that doesn't fit.  A sixteen bit fixed
point value, with suitable delta, should be fine for holding the
split correction, or 24 bits if you really want to allow for even the
most bizarre changes.  I don't know what kind of processing you are
doing, but usually one processes a small number of complete time
series, or the complete market for just a few days, so only a few
rows or columns of the complete matrix need be in RAM at any one time.
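
In Ada terms, the compact representations described above might be
sketched like this (illustrative only; the deltas and ranges would
need checking against real data):

   type Price_8ths_T is delta 0.125 range 0.0 .. 4095.875;  -- 16 bit price
   for Price_8ths_T'Size use 16;

   type Volume_24_T is range 0 .. 2**24 - 1;                -- 24 bit volume
   for Volume_24_T'Size use 24;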




* Re: Problems with large records (GNAT) [continued]
  2001-02-28 18:35 ` Laurent Guerby
@ 2001-03-01  8:17   ` Dr Adrian Wrigley
  2001-03-01  1:58     ` Robert A Duff
                       ` (2 more replies)
  0 siblings, 3 replies; 15+ messages in thread
From: Dr Adrian Wrigley @ 2001-03-01  8:17 UTC


Laurent Guerby wrote:
> 
> Dr Adrian Wrigley <amtw@linuxchip.demon.co.uk> writes:
> >    type Big_T is array (1 .. Size) of Float;
> >
> >    type Item_T is record
> >       First : Float;
> >       Item  : Big_T;
> >       Last  : Float;
> >    end record;
> 
> Why do you want to use a record here? Looks like 100% array here.

This example shows the problem.  Even if I use a simple array, I still can't
calculate the size using the 'Size attribute, since that always
raises an exception for arrays of the size used in some of my computations.
With a record, I can add whatever fields I need later, without recoding
stuff that already works.

> Also, if you use a clean abstraction to your data structure (private
> type), you can do all sorts of hacks behind the scene.

Actually, I tried to use a generic package which could map data
onto a file, hiding the implementation details.  What I want
is really a persistent object store, allowing different utilities
to access the same data.  It could be done with a standard
database binding (e.g. using SQL), but the performance is much
better with mmapped records, provided the 256M limit isn't exceeded.

I tried using different "hacks" to get it to work with big records
until I found this fundamental problem always caused the code
to break.  I have found no way to work out the size of the record
reliably.

> If you could describe more precisely the data structure you're trying
> to mmap, perhaps comp.lang.ada readers could help a bit more.

Since you ask, I have something like the following...
------------------------------------
   type PriceSummary_T is record
      OpenPrice   : Float;
      HighPrice   : Float;
      LowPrice    : Float;
      ClosePrice  : Float; -- Split and dividend corrected price
      UncorrectedClose : Float; -- Raw share price
      Volume      : Integer;
      Time        : Time_T;
   end record;

   type StockIndex_T is new Integer range 1 .. 3000;
   type TradingDay_T is new Integer range 0 .. 1860;

   type DailyArray_T is array (TradingDay_T, StockIndex_T) of PriceSummary_T;

   type BooleanDailyArray_T is array (TradingDay_T, StockIndex_T) of Boolean;
   pragma Pack (BooleanDailyArray_T);

   type Oracle_T is record
-- Various other record values may go here!
      Tradeable    : BooleanDailyArray_T;
      Daily        : DailyArray_T;
   end record;
------------------------------------
I then want to store Oracle_T representing 3000 stocks over seven years
in a single file for efficient access by various utility programs.

If I increase the number of stocks to 5000, things break and significant
changes are necessary because the 256M limit is exceeded.
Sometimes, the wrong answers are produced, which is particularly worrying.
--
Dr Adrian Wrigley




* Re: Problems with large records (GNAT) [continued]
  2001-03-01 22:18       ` Dr Adrian Wrigley
@ 2001-03-01 17:02         ` Robert A Duff
  0 siblings, 0 replies; 15+ messages in thread
From: Robert A Duff @ 2001-03-01 17:02 UTC


Dr Adrian Wrigley <amtw@linuxchip.demon.co.uk> writes:

>    type Big_T is array (0 .. 64*1024*1024) of Float;
> 
> begin
> 
>    Text_IO.Put_Line ("Size of Big_T is " &
>                      Integer'Image (Big_T'Size / System.Storage_Unit));

The above fails because you're implicitly converting Big_T'Size to
Integer, which is too small.  That is, the conversion happens before the
divide.  Try this:

    type Big_Integer is range 0 .. System.Storage_Unit * (2**32);

    Put_Line(Big_Integer'Image(Big_T'Size / System.Storage_Unit));

>    Text_IO.Put_Line ("Size of Big_T is " &
>                      Integer'Image (Big_T'Max_size_in_storage_elements));

I don't know about that one.

- Bob




* Re: Problems with large records (GNAT) [continued]
  2001-03-01 21:52       ` Dr Adrian Wrigley
@ 2001-03-01 19:32         ` tmoran
  0 siblings, 0 replies; 15+ messages in thread
From: tmoran @ 2001-03-01 19:32 UTC


>a lot of splits and dividends in their history have very
>small prices back in the '70s.
>...
>With volume, I think that really needs to better than 32 bit range.
>Once you start to calculate weekly or monthly volumes, quite a number
>...
>Fixed point for this wide ranging data doesn't give me the confidence
>I want from a (mission critical) financial application.

  Of course we were in a "minimize storage" mode for speed and
because customers downloaded the data over 14,400 baud modems.  So
internal computation could use large, or even floating point,
variables, but stored data was as compact as possible.  For instance,
the original prices were stored in 16 bits scaled, but were
converted, then multiplied by a "split factor", for computation.
Similarly for aggregated volumes.  Many customers did very trivial
arithmetic, but would be unhappy over rounding errors, so float
(especially for commodities with non power-of-two scale factors)
was undesirable for the raw data.




* Re: Problems with large records (GNAT) [continued]
  2001-03-01  8:17   ` Dr Adrian Wrigley
  2001-03-01  1:58     ` Robert A Duff
  2001-03-01  7:00     ` tmoran
@ 2001-03-01 19:38     ` Laurent Guerby
  2 siblings, 0 replies; 15+ messages in thread
From: Laurent Guerby @ 2001-03-01 19:38 UTC


Dr Adrian Wrigley <amtw@linuxchip.demon.co.uk> writes:
> Since you ask, I have something like the following...
> ------------------------------------
>    type PriceSummary_T is record
>       OpenPrice   : Float;
>       HighPrice   : Float;
>       LowPrice    : Float;
>       ClosePrice  : Float; -- Split and dividend corrected price
>       UncorrectedClose : Float; -- Raw share price
>       Volume      : Integer;
>       Time        : Time_T;
>    end record;
> 
>    type StockIndex_T is new Integer range 1 .. 3000;
>    type TradingDay_T is new Integer range 0 .. 1860;
> 
>    type DailyArray_T is array (TradingDay_T, StockIndex_T) of PriceSummary_T;
> 
>    type BooleanDailyArray_T is array (TradingDay_T, StockIndex_T) of Boolean;
>    pragma Pack (BooleanDailyArray_T);
> 
>    type Oracle_T is record
> -- Various other record values may go here!
>       Tradeable    : BooleanDailyArray_T;
>       Daily        : DailyArray_T;
>    end record;
> ------------------------------------

Nice to see we're not alone in using Ada for critical applications in the
financial world ;-).  (Note that we have a support contract with Ada
Core Technologies; if you have one, you can probably discuss the record
size limitation with them.)

Anyway, here I would probably want a few things:

- control over the layout of the data to match computational access,
depending on whether you process your series along the time axis or
the stock axis.

- lazy loading (i.e. not mmap'ing the whole thing at once).

To do so, I wouldn't put everything in the record; I would use
access types hidden behind a proper abstraction.

type PriceSummary_TV is array (Positive) of aliased PriceSummary_T;
type PriceSummary_TVA is access all PriceSummary_TV;

type Oracle_T is record
   N1 : Natural := 0;
   P1 : PriceSummary_TVA;
end record;

You mmap your stock history and store the pointer to it in your
Oracle_T (an Unchecked_Conversion from the address returned by mmap to
PriceSummary_TVA).  You then implement everything in terms of the
P1/N1 pair.  The "aliased" allows you to hand out access values to
individual records instead of copies while you're hiding things.  N1
is needed for manual range checking in your abstraction.
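
Sketched out (Mmap_Result and Record_Count are placeholders for the
address returned by mmap and the element count):

with System;
with Ada.Unchecked_Conversion;

function To_PriceSummary_TVA is
   new Ada.Unchecked_Conversion (System.Address, PriceSummary_TVA);

O : Oracle_T;
...
O.P1 := To_PriceSummary_TVA (Mmap_Result);
O.N1 := Record_Count;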

Note that if you compute historical volatilities or correlations, you
will probably be memory-bound performance-wise (algorithm complexity
proportional to data size, with a small enough computation factor so
that memory access becomes dominant).

-- 
Laurent Guerby <guerby@acm.org>




* Re: Problems with large records (GNAT) [continued]
  2001-03-01  7:00     ` tmoran
@ 2001-03-01 21:52       ` Dr Adrian Wrigley
  2001-03-01 19:32         ` tmoran
  0 siblings, 1 reply; 15+ messages in thread
From: Dr Adrian Wrigley @ 2001-03-01 21:52 UTC


tmoran@acm.org wrote:
> I realize you are interested in the generic large-memory-with-Gnat
> problem, but, as they say, sometimes it's better to improve the
> algorithm than the hardware.

I have the same frustration with this problem as with things like
segmented memory architectures, short index registers etc.
They all tend to result in less robust code, or a lot more work.
Hitting one of the various memory limits is one of the
common problems I encounter running GNAT/Linux.

I plan to go to (partial) intra-day data sometime, so that will
need a better representation.

...
> For historical data, stock prices can be 16 bit fixed point with delta of
> 1/8, rather than 32 bit floats (excluding Berkshire-Hathaway).  Even
> with prices in pennys nowadays, 24 bits should be quite enough for a
> stock price.  Similarly, a 24 bit Volume (16 million shares of one
> stock traded in one day) should be normally be adequate, perhaps with
> an exception list for anything that doesn't fit.  A sixteen bit fixed
> point value, with suitable delta, should be fine for holding the
> split correction, or 24 bits if you really want to allow for even the
> most bizarre changes.

I decided that 16 bits was inadequate.  Even with prices in the
range $0.05 to $500, you need 20 bits to accommodate a delta
representing 1% at the bottom end (500 / 0.0005 = 1_000_000, about
2**20 values).  Companies that have had
a lot of splits and dividends in their history have very
small prices back in the '70s. Perhaps a 16 bit logarithm of
the share price would be OK. (and even speed up volatility
calculations!)

With volume, I think that really needs better than a 32 bit range.
Once you start to calculate weekly or monthly volumes, quite a number
of companies exceed 2**32 shares. (and in some countries, they
even trade fractional shares routinely).  Maybe you've seen the
WWW sites of historic data that show Intel's monthly share volume
as things like "-1518500200 shares".  I mentioned this problem
to Yahoo nearly a year ago, but they haven't fixed it.

When it comes down to it, it is a matter of confidence and simplicity.
Fixed point for this wide ranging data doesn't give me the confidence
I want from a (mission critical) financial application.
I hadn't thought of using 24 bit values, and I think they would
not be worthwhile here given the issues involved.

>   I don't know what kind of processing you are
> doing, but usually one processes a small number of complete time
> series, or the complete market for just a few days, so only a few
> rows or columns of the complete matrix need be in RAM at any one time.

That's why I want a very fast data access method...  I want to
scan all the stocks over all the times.  Sometimes I access the data
sparsely as well.  With mmap, the data from one invocation to another
remain in RAM, and can be completely scanned in only a few seconds.
Maybe someday there will be a persistent object store
package in the Ada standard.  Loading data from files into RAM
tends to be amazingly slow, when the file and the in-memory
representation are both as big as the physical memory - and
my machine has no free memory slots :(
--
Adrian Wrigley




* Re: Problems with large records (GNAT) [continued]
  2001-03-01  1:58     ` Robert A Duff
@ 2001-03-01 22:18       ` Dr Adrian Wrigley
  2001-03-01 17:02         ` Robert A Duff
  0 siblings, 1 reply; 15+ messages in thread
From: Dr Adrian Wrigley @ 2001-03-01 22:18 UTC


Robert A Duff wrote:
> 
> Dr Adrian Wrigley <amtw@linuxchip.demon.co.uk> writes:
> 
> > If I increase the number of stocks to 5000, things break and significant
> > changes are necessary because the 256M limit is exceeded.
> 
> I didn't see the program that shows this 256M limit.  Please post it
> (again?), along with its output.  Make sure your program is compiled
> with all run-time checks turned on, which is not the default in GNAT
> (unfortunately).

The procedure "Dum" in the original post fails when Item_T reaches
256M - bizarrely with a STORAGE_ERROR for 256M-512M and 768-1024M,
and silently for 512M-768M and 1024-1280M. (assuming the size
in bytes is correctly calculated - as you pointed out, and
assuming you can successfully malloc that much space).
You saw this demonstration of the limit.  But I'll give you
another example... 

The following procedure "Big" simply generates a constraint error,
because the array exceeds the 256M limit.

$ gnatmake -gnato big
gnatgcc -c -gnato big.adb
big.adb:10:64: warning: Constraint_Error will be raised at run-time
big.adb:11:64: warning: Constraint_Error will be raised at run-time


-----------------------
with System;
with Text_IO;

procedure Big is

   type Big_T is array (0 .. 64*1024*1024) of Float;

begin

   Text_IO.Put_Line ("Size of Big_T is " &
                     Integer'Image (Big_T'Size / System.Storage_Unit));
   Text_IO.Put_Line ("Size of Big_T is " &
                     Integer'Image (Big_T'Max_size_in_storage_elements));

end Big;
-----------------------
Adrian




* Re: Problems with large records (GNAT) [continued]
  2001-02-28 10:44 Problems with large records (GNAT) [continued] Dr Adrian Wrigley
  2001-02-28  3:13 ` Robert A Duff
  2001-02-28 18:35 ` Laurent Guerby
@ 2001-03-02 20:32 ` Randy Brukardt
  2001-03-07  2:15 ` Dr Adrian Wrigley
  3 siblings, 0 replies; 15+ messages in thread
From: Randy Brukardt @ 2001-03-02 20:32 UTC


Dr Adrian Wrigley wrote in message
<3A9CD67C.9B15C417@linuxchip.demon.co.uk>...
>Hi all!
>
>You might remember a while back I had a problem with 'Size failing
>on large types, using GNAT (3.12p Intel Linux (Red Hat 6.2)).
>I never managed to solve that problem, and made do with keeping
>all my records under 256MB (!).


I tried this for grins on Janus/Ada. I get

** Unhandled CONSTRAINT_ERROR
   Value of literal out of the base type

which makes sense, because Janus/Ada only has 32-bit integers.
Moreover, it won't work right, because Big_T has a dynamic size. How
compilers allocate that will vary, but it doesn't necessarily have to be
contiguous.  (Indeed, Item_T'Size = 32*8 on Janus/Ada; the
array is stored separately.)

So, I tried using "new" instead, with the index type of Big_T being
static. Here's the result:

First is  3.14159E+00 last is  2.71828E+00

Sounds to me like you need a different compiler.  Our phone number is
1-800-722-3248 or visit www.rrsoftware.com. :-)

(I would not expect a problem with a program like this in Janus/Ada,
since it does all of its memory address calculations in units of bytes.
But then again, I wouldn't expect it in any other compiler either.)

                Randy Brukardt.

-- Here's the program I ran:

with Text_IO;
procedure Dum2 is

   Size : constant := 256*1024*1024 - 1;
   type Index_Type is range 1 .. Size;
   type Big_T is array (Index_Type) of Float;

   type Item_T is record
      First : Float;
      Item  : Big_T;
      Last  : Float;
   end record;

   type Item_A is access all Item_T;

   X : Item_A;

begin

   X := new Item_T;

   X.First := 3.14159;
   X.Last  := 2.71828;

   Text_IO.Put_Line ("First is " & Float'Image (X.First) &
                     " last is " & Float'Image (X.Last));

end Dum2;

* Re: Problems with large records (GNAT) [continued]
  2001-02-28 10:44 Problems with large records (GNAT) [continued] Dr Adrian Wrigley
                   ` (2 preceding siblings ...)
  2001-03-02 20:32 ` Randy Brukardt
@ 2001-03-07  2:15 ` Dr Adrian Wrigley
  3 siblings, 0 replies; 15+ messages in thread
From: Dr Adrian Wrigley @ 2001-03-07  2:15 UTC


I've just noticed...

If I add the line "pragma Shared_Passive;" to my package, then objects
declared in the package turn up on the local file system!

This gives the persistent storage feature I wanted, since the
values of the data are retained between program invocations.
It doesn't solve the colocation of record elements problem.
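
For reference, the setup is just the categorization pragma (a minimal
sketch; it needs GLADE or some other partition communication
subsystem configured):
-----------
package Persistent_Store is
   pragma Shared_Passive;

   --  Data declared here lives in shared/persistent storage and
   --  keeps its value between program invocations.
   Run_Count : Integer := 0;

end Persistent_Store;
-----------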

Unfortunately, the performance is worse than abysmal (GNAT/GLADE/Intel etc.),
and it can only write 300 bytes/sec (1GHz Athlon)!

Another 100_000x in speed, and it would *almost* be a solution.

Does anyone else use "Shared_Passive" for persistent storage?
--
Adrian Wrigley



