From: Al Christians
Subject: Re: Looking for keyed file package
Date: 1999/09/22
Message-ID: <37E8F0C1.A49C3E2D@easystreet.com>
References: <37E817C6.80ED41E0@easystreet.com> <7saii8$5bl$1@nnrp1.deja.com>
Organization: Trillium Resources Corporation
Newsgroups: comp.lang.ada

>> It would be interesting to understand why you are doing things
>> "by hand" here at such a low level, rather than use a DB. <<

The application is to use the keyed file to collate data from multiple sources. It's sort of a brief holding tank for data: pick up a file here, a file there, keep accumulating the information, then spit out sequential files for a different application to process on a different machine. The total lifecycle of the application is just a couple of weeks, and the number of simultaneous users of the file will probably never exceed one.

It's not worth buying a database for, and the speed objectives (processing hundreds of thousands of records in much less than an hour) and the large file sizes make me reluctant to try MS Access, the only database likely to be available on the target machines. All this data is going to come together next month at a site that I have not yet visited. Simpler is better when preparing to walk into a new location and run from a cold start.
I don't know whether their version of Access is the same as the one I could use here to develop and test, what the differences might be, or how much trouble they might cause, and I don't want to find out. I just need the simplest thing that will work. Low cost doesn't hurt, either.

I did try each of Pascal Obry's packages and the one from Ada-Belgium, with no success. I think the Ada-Belgium package has an off-by-one error that causes a Constraint_Error when the start of a record happens to coincide with the start of a block of data in the file. It would load my data and let me access it, but about 1000 records into the file it would crash, with a subscript computed as (something mod 512) being used as an index into an array (1 .. 512).

I'll have to go back and reconstruct my code to re-create the problems I was getting with Pascal's modules. That won't be hard -- he's designed them with a very easy-to-use interface. Unfortunately, I'm working under an extreme time crunch, so I think I'm going to move even further toward the 'do-it-all-by-hand' position using direct IO. I'll just create the index in memory each time I run by reading the entire file. Up to a million records should fit fine.

Al
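P.S. The suspected off-by-one can be illustrated in a few lines. This is my guess at the failure mode, not the Ada-Belgium code itself: "Pos mod 512" yields 0 whenever a position lands exactly on a block boundary, which blows up against a 1-based array. The names here are made up for the demo.

```ada
with Ada.Text_IO;

procedure Mod_Demo is
   Block_Size : constant := 512;

   type Block_Index is range 1 .. Block_Size;

   --  Suspected buggy mapping: "Pos mod Block_Size" yields 0 when Pos
   --  is an exact multiple of Block_Size, so converting it to a 1-based
   --  index raises Constraint_Error about once per 512 records.

   --  Correct 1-based mapping: shift down, take the remainder, shift up.
   function Slot (Pos : Positive) return Block_Index is
   begin
      return Block_Index (((Pos - 1) mod Block_Size) + 1);
   end Slot;

begin
   Ada.Text_IO.Put_Line (Block_Index'Image (Slot (511)));  --  511
   Ada.Text_IO.Put_Line (Block_Index'Image (Slot (512)));  --  512, not 0
   Ada.Text_IO.Put_Line (Block_Index'Image (Slot (513)));  --  1
end Mod_Demo;
```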
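P.P.S. For the record, here's a minimal sketch of the do-it-by-hand plan: read the whole file once with Direct_IO and build a key-to-record-number map in memory. The record layout, key type, and file name are all invented for illustration, and Ada.Containers.Ordered_Maps is a newer-standard library; a sorted array plus binary search would serve the same purpose where it isn't available.

```ada
with Ada.Direct_IO;
with Ada.Containers.Ordered_Maps;

procedure Build_Index is

   --  Hypothetical fixed-length record with an embedded key.
   type Key_Type is new String (1 .. 8);
   type Record_Type is record
      Key  : Key_Type;
      Data : String (1 .. 120);
   end record;

   package Rec_IO is new Ada.Direct_IO (Record_Type);

   --  In-memory index: key -> record number within the file.
   package Key_Maps is new Ada.Containers.Ordered_Maps
     (Key_Type     => Key_Type,
      Element_Type => Rec_IO.Positive_Count,
      "="          => Rec_IO."=");

   File  : Rec_IO.File_Type;
   Item  : Record_Type;
   Index : Key_Maps.Map;

begin
   Rec_IO.Open (File, Rec_IO.In_File, "holding.dat");  --  name assumed

   --  One sequential pass over the whole file builds the index.
   --  Include (rather than Insert) keeps the last occurrence if the
   --  incoming files repeat a key.
   for Pos in 1 .. Rec_IO.Size (File) loop
      Rec_IO.Read (File, Item, From => Pos);
      Index.Include (Item.Key, Pos);
   end loop;

   --  Keyed access afterwards is a direct read:
   --  Rec_IO.Read (File, Item, From => Index.Element (Some_Key));

   Rec_IO.Close (File);
end Build_Index;
```

At a million records this index holds only keys and record numbers, so memory shouldn't be an issue.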