From: mhamel_98@yahoo.com
Newsgroups: comp.lang.ada
Subject: Memory Usage
Date: Fri, 08 Jun 2007 13:38:35 -0700
Message-ID: <1181335115.659050.135860@q69g2000hsb.googlegroups.com>

Hello c.l.a.  Another question: I have a program that stores data on
disk using Sequential_IO.  When I later read that data back into an
array, the program grows by far more than the file's disk footprint.
A file that takes 26.8 MB on disk (over 134k records, so roughly 200
bytes per record) makes the program swell by over 600 MB, which works
out to some 4.5 KB per record, more than twenty times the on-disk
size.  Holy bloatware.

A short overview of what I'm trying to do: each Sequential_IO data
file has an associated header file with stuff like the number of
records, etc.  The header is read first, and an array is created
based on how many records the header says the data file holds.  The
data file is then read record by record, sticking each node into the
array.  Some abbreviated code below; the spec:

generic
   type Node_Type is private;
package Node_Manager is

   package Seq is new Sequential_Io (Node_Type);

   type Node_Array is array (Positive range <>) of Node_Type;
   type Node_Ptr   is access Node_Array;

   type Data_Rec is record
      Hdr  : Node_Hdr;   -- record count and such; declaration omitted
      Data : Node_Ptr;
   end record;

Body stuff:

--  Context clause has "with Unchecked_Deallocation;"
procedure Free is new Unchecked_Deallocation (Node_Array, Node_Ptr);

procedure Open (File : in out Data_Rec; Name : in String) is
   Dat_File : Seq.File_Type;  -- the data file being read
   Curr     : Positive := 1;
   Node     : Node_Type;
begin
   Read_Hdr (Name, File.Hdr);  -- reads the associated header file
   File.Data := new Node_Array (1 .. File.Hdr.Size);
   Seq.Open (Dat_File, Seq.In_File, Name & ".dat");
   while not Seq.End_Of_File (Dat_File) loop
      Seq.Read (Dat_File, Node);
      File.Data (Curr) := Node;
      Curr := Curr + 1;
   end loop;
   Seq.Close (Dat_File);
...

The program works as I want, though up until recently I had only dealt
with very small data sets, which is why I never noticed the undue
memory growth.  Now that I'm working with some "large" data sets, the
bloat is unbearable.  Any suggestions?  (Besides looking for another
line of work ;) )  Platform is ObjectAda 7.2 on WinNT.
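For what it's worth, the matching teardown just calls the Free instance
declared above; a sketch of the shape (illustrative, not my exact code):

procedure Close (File : in out Data_Rec) is
begin
   if File.Data /= null then
      Free (File.Data);  -- instance of Unchecked_Deallocation; nulls File.Data
   end if;
end Close;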
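P.S.  To pin down where the memory actually goes, one check would be to
compare the array's expected footprint against the observed growth.  A
minimal sketch, assuming it sits in the Node_Manager body (so Node_Type
and Data_Rec are visible) and that the body withs Ada.Text_IO; "Report"
is just an illustrative name, not something I have yet:

procedure Report (File : in Data_Rec) is
   --  'Size is in bits; divide by 8 for bytes per element.
   Bytes_Per_Node : constant Natural := Node_Type'Size / 8;
   --  Expected size of the array allocated in Open.
   Array_Bytes    : constant Natural := File.Hdr.Size * Bytes_Per_Node;
begin
   Ada.Text_IO.Put_Line ("Bytes per node :" & Natural'Image (Bytes_Per_Node));
   Ada.Text_IO.Put_Line ("Array footprint:" & Natural'Image (Array_Bytes));
end Report;

If Array_Bytes comes out near the 26.8 MB disk figure, then the extra
600 MB must be going somewhere other than the array itself.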