From: mhamel_98@yahoo.com
Newsgroups: comp.lang.ada
Subject: Re: Memory Useage
Date: Fri, 08 Jun 2007 20:09:38 -0700
Message-ID: <1181358578.893768.176720@x35g2000prf.googlegroups.com>
References: <1181335115.659050.135860@q69g2000hsb.googlegroups.com>
 <1181349804.839474.212720@r19g2000prf.googlegroups.com>
In-Reply-To: <1181349804.839474.212720@r19g2000prf.googlegroups.com>

On Jun 8, 5:43 pm, Adam Beneschan wrote:
> On Jun 8, 1:38 pm, mhamel...@yahoo.com wrote:
>
> > Hello c.l.a. Another question: I have a program that stores data on
> > the disk using Sequential_IO. When I later read that data into an
> > array, the memory growth after ingesting a file is much, much larger
> > than the disk footprint. A file that takes 26.8MB on disk (over 134k
> > records) causes the program to swell by over 600MB! Holy bloatware.
> > A short overview of what I'm trying to do: each Sequential_IO data
> > file has an associated header file with stuff like the number of
> > records, etc. The header is read, and an array is then created based
> > on how many records the data file is said to contain. The data file
> > is then read, sticking each node into the array. Some abbreviated
> > code below, the spec:
> >
> >    generic
> >       type Node_Type is private;
> >    package Node_Manager is
> >
> >       package Seq is new Sequential_IO (Node_Type);
> >
> >       type Node_Array is array (Positive range <>) of Node_Type;
> >       type Node_Ptr is access Node_Array;
> >
> >       type Data_Rec is
> >          record
> >             Hdr  : Node_Hdr;
> >             Data : Node_Ptr;
> >          end record;
> >
> > Body stuff:
> >
> >    procedure Free is new Unchecked_Deallocation (Node_Array, Node_Ptr);
> >
> >    procedure Open (File : in out Data_Rec;
> >                    Name : in     String) is
> >       Curr : Positive := 1;
> >       Node : Node_Type;
> >    begin
> >       Read_Hdr (Name, File.Hdr);
> >       File.Data := new Node_Array (1 .. File.Hdr.Size);
> >
> >       Seq.Open (Dat_File, Seq.In_File, Name & ".dat");
> >       while not Seq.End_Of_File (Dat_File) loop
> >          Seq.Read (Dat_File, Node);
> >          File.Data.all (Curr) := Node;
> >          Curr := Curr + 1;
> >       end loop;
> >       Seq.Close (Dat_File);
> >       ...
> >
> > The program works as I've wanted, though until recently I've only
> > dealt with very small data sets, which is why I never noticed undue
> > memory growth. Now that I'm working with some "large" data sets, the
> > bloat is unbearable. Any suggestions?
> > (Besides look for another line of work ;) )
> >
> > Platform is ObjectAda 7.2 on WinNT.
>
> Are you sure File.Hdr.Size is correct? (I.e., is it the same as the
> number of records in the file?)
>
>                     -- Adam

Yep, pretty sure the size is correct. There is an internal confidence
test I developed some time ago; I'll run it on this data set on Monday.
But things further down the road in the program break if the size is
over- or under-stated in the header.
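For whatever it's worth: 26.8MB over ~134k records is roughly 200 bytes
per record on disk, while 600MB works out to about 4.5KB per array
element in memory, a factor of twenty-odd. A standalone sketch along the
lines below (the record layout and file name are placeholders; the real
Node_Type is generic and not shown in the post) counts the records
actually in the file, independent of the header, and prints a lower
bound on what one element costs in memory:

   with Ada.Sequential_IO;
   with Ada.Text_IO;

   procedure Check_Node_File is

      --  Placeholder record; stands in for the poster's private
      --  Node_Type, which is not shown.
      type Node_Type is record
         X, Y, Z : Float;
         Id      : Integer;
      end record;

      package Seq is new Ada.Sequential_IO (Node_Type);

      Dat_File : Seq.File_Type;
      Node     : Node_Type;
      Count    : Natural := 0;

   begin
      --  Count the records actually present, independent of the header.
      Seq.Open (Dat_File, Seq.In_File, "example.dat");  --  placeholder name
      while not Seq.End_Of_File (Dat_File) loop
         Seq.Read (Dat_File, Node);
         Count := Count + 1;
      end loop;
      Seq.Close (Dat_File);

      Ada.Text_IO.Put_Line ("Records in file:" & Natural'Image (Count));

      --  Node_Type'Size is in bits and is only a lower bound; the
      --  compiler may pad components or round the array component size
      --  up, so check Node_Array'Component_Size in the real package too.
      Ada.Text_IO.Put_Line
        ("Bytes per element (minimum):"
         & Integer'Image (Node_Type'Size / 8));
   end Check_Node_File;

If the counted records match File.Hdr.Size and the per-element size is
close to the on-disk figure, the growth would have to be coming from
somewhere else (the allocation itself, or the runtime's allocator
holding onto memory, say) rather than from the array being over-sized.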