From: gelleric@kafka.informatik.uni-stuttgart.de (Wolfgang Gellerich)
Subject: Re: GOTO considered necessary (reworked)
Date: 1997/06/25
Message-ID: <5oqu9i$s22@zdi.informatik.uni-stuttgart.de>
References: <5nn2fm$11dk$1@prime.imagin.net> <33A58C79.1C7A@sprintmail.com>
Organization: Universität Stuttgart
Newsgroups: comp.lang.ada
Date: 1997-06-25T00:00:00+00:00

In article, bobduff@world.std.com (Robert A Duff) writes:

(...)

|> In my experience, a hand-coded lexical analyzer is always better done
|> with loops and ifs and case statements, rather than as above. That is,
|> forget that it's a FSM, and just write it in the natural way. ("An
|> identifier is a letter, followed by a sequence of zero or more letters
|> and digits" translates quite nicely into loops, not gotos.) I know you
|> agree, since I've seen the GNAT lexer. (A tool-generated lexer is a
|> whole 'nother story, of course -- throw readability to the winds, and go
|> for efficiency. Funny thing is, tool-generated lexers are less
|> efficient, in practise, in my experience.)

Definitely. We recently conducted a study comparing different methods of
implementing an Ada 95 scanner, and then measured the time each one needed
to lexically analyse some large Ada sources. The slowest and the fastest
scanners differed by a factor of 74 (!), and the fastest one was a
hand-coded scanner that used no GOTOs and basically followed the strategy
described above.

------
Wolfgang