From: "Marin David Condic"
Newsgroups: comp.lang.ada
Subject: Re: functions, packages & characters
Date: Fri, 22 Feb 2002 08:54:33 -0500
Organization: Posted on a server owned by Pace Micro Technology plc

I'm curious about why you think this would be slow. (I may have to build a small chunk of code and time it! :-) In most applications I can think of, you can usually pick some arbitrary maximum buffer size (the size of "Line" below) that will accommodate most lines in your average text file. (Usually you have some notion of the kinds of text files you want to process, right?)
The presumption is that you might infrequently have to go 'round the loop a second (or third, or fourth) time to glom onto the rest of the line. So for 99% of the time (or some similarly high percentage) you're just doing a single Get_Line (is that inefficient?) and a single Append to an unbounded string (again, is that necessarily inefficient?). I've never looked into the underlying implementation of Unbounded_String in any Ada compiler, so I have no clue how naturally (in)efficient it may be. I'm guessing a typical implementation is going to be some collection of memory blocks strung together with pointers and some counters. Would "Append" require some huge overhead?

MDC
--
Marin David Condic
Senior Software Engineer
Pace Micro Technology Americas    www.pacemicro.com
Enabling the digital revolution
e-Mail:    marin.condic@pacemicro.com
Web:       http://www.mcondic.com/

"Randy Brukardt" wrote in message news:u7bcro6je1h519@corp.supernews.com...
> >     loop
> >        Get_Line (Line, Last);
> >        Append (Buffer, Line (Line'First .. Last));
> >        exit when Last < Line'Last;
> >     end loop;
>
> This is, of course, the inefficiency that I was commenting on. This
> would be slow, and would cause an awful lot of memory allocation and
> deallocation, with the possible exception of the first iteration. It
> would only be a good idea if the size of Line was larger than almost all
> expected lines.
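For anyone wanting to time this themselves, the quoted loop could be fleshed out into a complete program along these lines. This is just a sketch: the procedure name, the 132-character buffer size, and reading from standard input are all my own arbitrary choices, not anything from the original posts.

```ada
with Ada.Text_IO;           use Ada.Text_IO;
with Ada.Strings.Unbounded; use Ada.Strings.Unbounded;

procedure Read_Long_Lines is
   --  Arbitrary fixed-size buffer; 132 is an assumed "typical" width.
   Line   : String (1 .. 132);
   Last   : Natural;
   Buffer : Unbounded_String;
begin
   while not End_Of_File loop
      Buffer := Null_Unbounded_String;
      --  The inner loop repeats only when a line overflows the buffer,
      --  so most lines cost one Get_Line and one Append.
      loop
         Get_Line (Line, Last);
         Append (Buffer, Line (Line'First .. Last));
         exit when Last < Line'Last;
      end loop;
      --  Buffer now holds one complete logical line, however long.
      Put_Line ("Read" & Natural'Image (Length (Buffer)) & " characters");
   end loop;
end Read_Long_Lines;
```

One quirk worth noting: when a line's length is an exact multiple of the buffer size, Get_Line fills the buffer without consuming the line terminator, so the loop makes one extra (empty) iteration before exiting. The exit condition still works correctly; it just costs one more Get_Line in that corner case.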