From: Robert Dewar
Newsgroups: comp.lang.ada
Subject: Re: Ada 95 grammar for aflex?
Date: Mon, 22 Jan 2001 17:32:39 GMT
Organization: Deja.com
Message-ID: <94hqrn$s8s$1@nnrp1.deja.com>
References: <3A6703BD.5D78D62@informatik.uni-stuttgart.de> <949qf7$l43$1@nnrp1.deja.com> <3A68877B.C7AF95D2@uol.com.br> <94c7eq$i89$1@nnrp1.deja.com> <3A69BA8F.A26A531E@uol.com.br> <94comf$vf7$1@nnrp1.deja.com> <3A6A418E.80F1ADC@uol.com.br>

In article , "Ira D. Baxter" wrote:

> Even if you are interested in error recovery, you can build
> good error recovery in a table driven parser.

This is arguable. Here's a specific challenge: in the context of a
bottom-up table-driven parser, implement the error recovery that GNAT
does for a semicolon used in place of IS, and vice versa (this is a
particularly tough one). Really the issue is good vs. excellent :-)

> That's impressively fast, but it depends on what you mean by
> "parsing".

I mean determining whether the given string is in the language and
giving a semi-decent error msg if it is not.

> Our automatically generated parsers automatically capture
> identifiers and other kinds of strings and place them in string
> dictionaries, carry out integer and floating point value
> conversions, produce abstract syntax trees, capture and attach
> comments and source line information to AST nodes, etc.

This would indeed slow things down (probably by a factor of 2 at
most), so we would still be at millions of lines per minute.

> It's still OK. Even with all the extra work, and not a lot of
> optimization, we manage to generate parsers that do several
> thousand lines per second (180K lines/minute). We assume that with
> hard work (the kind you do to produce "assembly language parsers"),
> we can generate parsers that are twice as fast.

Right, so that is perhaps a factor of 10-100 away from what I can
achieve in my asm program, which is what I would expect.

It is definitely the case that parsing efficiency is not critical (or
should not be :-), so the issue driving table-driven vs. hand-written
parsers is indeed dealing with error detection and recovery in an
excellent manner!

Sent via Deja.com
http://www.deja.com/
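
For the semicolon-vs-IS case above, here is a minimal sketch (in Ada)
of the scan-ahead idea a hand-written recursive descent parser can
apply, covering only the direction where ';' was written in place of
IS. This is not GNAT's actual code: the token names, the canned token
stream, and the procedure name Recovery_Sketch are all invented for
the illustration; the point is only how much token context the
decision needs.

with Ada.Text_IO;

procedure Recovery_Sketch is

   --  Hypothetical token kinds for the sketch
   type Token_Kind is
     (Tok_Semicolon, Tok_Is, Tok_Begin, Tok_Procedure, Tok_Function,
      Tok_Type, Tok_Identifier, Tok_End, Tok_EOF);

   --  A canned token stream standing in for a real scanner, holding
   --  "procedure P; begin end P;" where ';' was written in place of IS
   Tokens : constant array (1 .. 8) of Token_Kind :=
     (Tok_Procedure, Tok_Identifier, Tok_Semicolon,
      Tok_Begin, Tok_End, Tok_Identifier, Tok_Semicolon, Tok_EOF);

   Cursor : Positive := 3;
   --  Assume the parser has already consumed "procedure P" and is now
   --  looking at the token where it expects either IS or ';'

   procedure Scan is
   begin
      Cursor := Cursor + 1;
   end Scan;

   --  If we see ';' where IS is also legal, peek at what follows: a
   --  declarative item or BEGIN means a body follows, so the ';' was
   --  almost certainly meant to be IS
   function Semicolon_Should_Be_Is return Boolean is
      Next : constant Token_Kind := Tokens (Cursor + 1);
   begin
      return Next = Tok_Begin
        or else Next = Tok_Procedure
        or else Next = Tok_Function
        or else Next = Tok_Type;
   end Semicolon_Should_Be_Is;

begin
   if Tokens (Cursor) = Tok_Semicolon and then Semicolon_Should_Be_Is then
      Ada.Text_IO.Put_Line (""";"" should be ""is""");
      Scan;  --  treat the ';' as IS and go on to parse the body
   elsif Tokens (Cursor) = Tok_Is then
      Scan;  --  normal case: parse the body
   end if;
end Recovery_Sketch;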
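
And a toy illustration of the kind of extra work in the quoted list (a
string dictionary plus AST nodes that carry source positions). Again
the names (Intern_Sketch, Name_Id, Node) are invented and tied to no
particular tool; work of this flavour is what contributes the factor
of 2 at most mentioned above.

with Ada.Text_IO;

procedure Intern_Sketch is

   --  A toy string dictionary: each distinct identifier is stored once
   --  and referred to by a small integer, so AST nodes do not carry
   --  copies of names
   type Name_Id is new Natural;
   type String_Access is access String;

   Max_Names : constant := 100;
   Table : array (1 .. Max_Names) of String_Access;
   Last  : Natural := 0;

   function Intern (S : String) return Name_Id is
   begin
      for I in 1 .. Last loop
         if Table (I).all = S then
            return Name_Id (I);
         end if;
      end loop;
      Last := Last + 1;
      Table (Last) := new String'(S);
      return Name_Id (Last);
   end Intern;

   --  A toy AST node: kind, interned name, and the source position
   --  captured while parsing
   type Node_Kind is (N_Identifier, N_Procedure_Body);
   type Node is record
      Kind   : Node_Kind;
      Name   : Name_Id;
      Line   : Positive;
      Column : Positive;
   end record;

   N : constant Node :=
     (Kind => N_Identifier, Name => Intern ("Counter"),
      Line => 12, Column => 8);

begin
   Ada.Text_IO.Put_Line
     ("name id =" & Name_Id'Image (N.Name)
      & ", line =" & Positive'Image (N.Line));
end Intern_Sketch;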