From: "Randy Brukardt"
Newsgroups: comp.lang.ada
Subject: Re: [Slightly OT] How to process lightweight text markup languages?
Date: Tue, 20 Jan 2015 16:00:10 -0600
Organization: Jacob Sparre Andersen Research & Innovation
References: <1wclq766iu82d.b0k1hx30rgrt.dlg@40tude.net>

"Dmitry A. Kazakov" wrote in message
news:1wclq766iu82d.b0k1hx30rgrt.dlg@40tude.net...
> On Tue, 20 Jan 2015 18:47:13 +0000 (UTC), Natasha Kerensikova wrote:
>
>> On 2015-01-18, Dmitry A. Kazakov wrote:
>>> On Sun, 18 Jan 2015 18:04:08 +0000 (UTC), Natasha Kerensikova wrote:
>>>
>>> [...]
>>>
>>>> My latest attempts involve keeping the online architecture with
>>>> separate input and output types and streams, and keeping a stack of
>>>> currently opened constructs, with a dynamically dispatching ending
>>>> test on each character for each construct on the stack. It feels
>>>> horribly inefficient and complicated.
>>>
>>> Nothing complicated, and most efficient in the sense that, depending
>>> on the formal language classification, you may not be able to
>>> eliminate the stack (memory) in some form or other. You could think
>>> of it in terms of the possible states of the parser. If the number of
>>> states is infinite you must have a stack, or else the parser itself
>>> must be infinite. A simple example is parsing the language of
>>> balanced brackets: ()(()).
>>
>> Well, it feels insurmountably complicated to me; that's why I posted
>> in the first place -- to be enlightened.
>
> Nothing complicated to me, so far.
>
>> What I still can't make fit with what I know is how to deal
>> simultaneously with the precedence and the "implicit escaping", which
>> is further muddied by the fact that the interpretation of what is
>> inside a construct depends on the particular current construct.
>>
>> To put it in a grammar-like way (even though I doubt the considered
>> language has a grammar), I would have something like:
>>
>> formatted-text ::= code-fragment | link | ... | literal-text
>> code-fragment ::= '`' literal-text '`'
>> link ::= '[' formatted-text ']' '(' url [ link-title ] ')'
>> link-title ::= '"' literal-text '"'
>
> [] - brackets
> () - brackets
> `` - quotation marks
> "" - quotation marks
>
>> So if you remember my example,
>>
>> [alpha`](http://example.com/)`](http://other.com)
>
> Since `` are quotation marks, the above should be:
>
> +
> |_ []
> |  |_ +
> |     |_ alpha
> |     |_ ](http://example.com/)
> |_ ()
>    |_ http://other.com
>
> + is an assumed infix catenation operation. No backtracking needed.
>
> [...]
>> Am I right so far? Am I missing something?
>
> Distinguishing lexical and syntactical elements? You don't bother with
> operators until the expression terms (lexemes) are matched. Once you
> have matched them you never go back. They are all on the stack if not
> already bound by an operation. If `` is declared a literal, it is a
> term of the expression, atomically matched. It naturally takes
> precedence over anything else.

I agree with Dmitry: "standard" parsing has two stages, lexing
(converting the input into a token stream) and parsing. You're trying
to make do with only one, which complicates the problem a lot for no
reason.
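To make the lexing stage concrete, here is a throwaway sketch of a
first pass over your example (Ada 2012; the token names are invented
and there is no handling of an unterminated back-quote -- it's an
illustration, not a real scanner). The only point is that a `...` span
comes out of the lexer as one atomic token, which the parsing stage
never looks inside:

   with Ada.Text_IO; use Ada.Text_IO;

   procedure Lex_Demo is
      --  Throwaway lexer sketch: split the input into punctuation
      --  tokens, runs of plain text, and atomic back-quoted code spans.
      Input : constant String :=
        "[alpha`](http://example.com/)`](http://other.com)";
      I : Positive := Input'First;
   begin
      while I <= Input'Last loop
         case Input (I) is
            when '[' | ']' | '(' | ')' =>
               Put_Line ("PUNCT     " & Input (I));
               I := I + 1;
            when '`' =>
               --  Scan to the closing back-quote; the whole span
               --  becomes one token, never re-examined by the parser.
               declare
                  J : Positive := I + 1;
               begin
                  while J <= Input'Last and then Input (J) /= '`' loop
                     J := J + 1;
                  end loop;
                  Put_Line ("CODE_SPAN " & Input (I + 1 .. J - 1));
                  I := J + 1;
               end;
            when others =>
               --  Plain text runs until the next markup character.
               declare
                  J : Positive := I;
               begin
                  while J <= Input'Last
                    and then Input (J) not in '[' | ']' | '(' | ')' | '`'
                  loop
                     J := J + 1;
                  end loop;
                  Put_Line ("TEXT      " & Input (I .. J - 1));
                  I := J;
               end;
         end case;
      end loop;
   end Lex_Demo;

On that input the token stream is [ , "alpha", one CODE_SPAN holding
"](http://example.com/)", ], (, "http://other.com" and ), which is
exactly the shape of Dmitry's tree -- no backtracking needed.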
Also note that an LR parser acts similarly to your "parsing all
possibilities at once": the parser state encodes all of the
possibilities at the current point in the parse, so it can generally
handle quite a bit of complication. LR parsers are usually generated by
a tool, so if the grammar does not have a unique solution, that is
detected at parser-generation time.

As Dmitry says, such parsers (like the one we use in Janus/Ada) make
error correction more challenging to deal with (the parser generator we
use originated as a research project into automated error correction --
we don't use its error correction, which tells you how well that worked
:-), but they can be quite small and very fast (especially on larger
grammars like Ada's).

                            Randy.
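P.S. Dmitry's balanced-brackets example is easy to play with, too. For
that particular language the stack degenerates into a single counter
(the depth of nesting); the sketch below (again just an illustration,
not taken from any real code) accepts "()(())" and rejects anything
unbalanced:

   with Ada.Text_IO; use Ada.Text_IO;

   procedure Check_Brackets is
      Input : constant String := "()(())";
      Depth : Natural := 0;     --  the "stack", reduced to its depth
      OK    : Boolean := True;
   begin
      for C of Input loop
         if C = '(' then
            Depth := Depth + 1;
         elsif C = ')' then
            if Depth = 0 then
               OK := False;     --  closing bracket with nothing to match
               exit;
            end if;
            Depth := Depth - 1;
         end if;
      end loop;

      if OK and then Depth = 0 then
         Put_Line ("balanced");
      else
         Put_Line ("unbalanced");
      end if;
   end Check_Brackets;

With several kinds of constructs (brackets, parentheses, back-quotes),
the counter has to become a real stack of currently open constructs --
which is essentially the stack you already have.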