From: "Brien L. Christesen"
Newsgroups: comp.lang.ada
Subject: sorting large numbers of large records
Date: Mon, 28 Jul 2003 15:29:20 +0000 (UTC)
Organization: University of Maryland College Park

First off, I'm an Ada newbie; I've only been using it for about a month now. I am trying to sort ~1,000,000 records (with the potential to grow to 60 million) of 256 bytes each in Ada. The records are written to a binary file.

Right now, I read in the file and divide it into two files: records greater than and records less than a pivot value. I then call this division function recursively until it reaches a set of fewer than 100,000 records (if I make this number any larger, I get a stack overflow on my Windows box); at that point it creates an array of the records, calls quicksort on it, and appends the result to a final results file.

I couldn't really find much discussion of doing external sorts in Ada (or much literature on external sorts at all), so I'm not sure whether this is a good approach to be taking, or whether there is a better way to do the sort.
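In case it helps clarify what I'm doing, here is a rough sketch of the partition-into-files scheme in Python pseudocode (my real code is Ada; all the names here, like RECORD_SIZE and MEMORY_LIMIT, are just placeholders for this illustration):

```python
import os
import tempfile

RECORD_SIZE = 256        # fixed record length, as in my data
MEMORY_LIMIT = 100_000   # largest set I sort in memory (the base case)

def read_records(path):
    """Yield fixed-size records from a binary file, one at a time."""
    with open(path, "rb") as f:
        while rec := f.read(RECORD_SIZE):
            yield rec

def external_quicksort(in_path, out_file):
    """Partition in_path around a pivot record into two temp files,
    recurse on each, and append sorted records to the open out_file."""
    n = os.path.getsize(in_path) // RECORD_SIZE
    if n <= MEMORY_LIMIT:
        # Base case: small enough to sort entirely in memory.
        for rec in sorted(read_records(in_path)):
            out_file.write(rec)
        return
    records = read_records(in_path)
    pivot = next(records)      # first record serves as the pivot
    equal = [pivot]            # records equal to the pivot need no more sorting
    with tempfile.NamedTemporaryFile(delete=False) as less, \
         tempfile.NamedTemporaryFile(delete=False) as greater:
        less_path, greater_path = less.name, greater.name
        for rec in records:
            if rec < pivot:
                less.write(rec)
            elif rec > pivot:
                greater.write(rec)
            else:
                equal.append(rec)
    external_quicksort(less_path, out_file)
    out_file.writelines(equal)
    external_quicksort(greater_path, out_file)
    os.unlink(less_path)
    os.unlink(greater_path)
```

This matches what my Ada version does, except that the sketch sends records equal to the pivot straight to the output rather than re-partitioning them.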
I also tried creating an index, sorting the index, and then building the results file from the index. Jumping around the record file with Direct_IO led to awful results, though: the execution time of the part of the code that read the index, fetched the corresponding record from the large record file, and wrote that record to a new file seemed to grow exponentially with the file size. Does anyone know of a good way to do this kind of sort? Thanks in advance for any responses.