From: Natasha Kerensikova
Newsgroups: comp.lang.ada
Subject: Re: [Slightly OT] How to process lightweight text markup languages?
Date: Tue, 20 Jan 2015 18:47:13 +0000 (UTC)

On 2015-01-18, Dmitry A. Kazakov wrote:
> On Sun, 18 Jan 2015 18:04:08 +0000 (UTC), Natasha Kerensikova wrote:
>
> [...]
>
>> My latest attempts involve keeping the online architecture with separate
>> input and output types and streams, and keeping a stack of currently
>> opened constructs, with a dynamically dispatching ending test on each
>> character for each construct on the stack. It feels horribly
>> inefficient and complicated.
>
> Nothing complicated and most efficient in the sense that depending on the
> formal language classification you could not be able eliminate the stack
> (memory) in some or other form. You could think of it in terms of possible
> states of the parser. If the number of states is infinite you must have
> stack or else the parser itself must be infinite. Simple example is parsing
> the language of balanced brackets: ()(()).

Well, it feels insurmountably complicated to me; that's why I posted in
the first place -- to be enlightened.

What I still can't fit with what I know is how to deal simultaneously
with the precedence and the "implicit escaping", which is further
muddied by the fact that the interpretation of what is inside a
construct depends on the particular enclosing construct.

To put it in a grammar-like way (even though I doubt the considered
language has a grammar), I would have something like:

   formatted-text ::= code-fragment | link | ... | literal-text
   code-fragment  ::= '`' literal-text '`'
   link           ::= '[' formatted-text ']' '(' url [ link-title ] ')'
   link-title     ::= '"' literal-text '"'

So if you remember my example,

   [alpha`](http://example.com/)`](http://other.com)

the part "](http://example.com/)" matches the end of a link, but only in
a formatted-text context, not inside a code fragment. Therefore,
considering an online algorithm, when the part
"[alpha`](http://example.com/)" has been read, there is no way to decide
whether it's a link or not, until the closing backtick is encountered,
or until reaching the end of the text (or of the enclosing
even-higher-precedence structure) proves that there is no closing
backtick.

This led me to the conclusion that the only solutions are backtracking
and simultaneously parsing all available possibilities (i.e. using a
nondeterministic automaton). Considering the current state of computing,
I should probably go for backtracking.
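To make that context dependence concrete, here is a rough Ada sketch of
the dispatching ending test mentioned above. Every name in it
(Construct, Ends_Here, Link_Text, ...) is made up for illustration,
nothing comes from an existing library, and it only knows about code
fragments and the bracketed text of links:

with Ada.Text_IO; use Ada.Text_IO;

procedure Ending_Test_Sketch is

   Input : constant String := "[alpha`](http://example.com/)";

   type Construct is abstract tagged null record;

   --  Does the text starting at Position close this construct?
   function Ends_Here
     (C : Construct; Position : Positive) return Boolean is abstract;

   type Code_Fragment is new Construct with null record;
   overriding function Ends_Here
     (C : Code_Fragment; Position : Positive) return Boolean
   is (Input (Position) = '`');

   type Link_Text is new Construct with null record;
   overriding function Ends_Here
     (C : Link_Text; Position : Positive) return Boolean
   is (Position < Input'Last
       and then Input (Position .. Position + 1) = "](");

   --  What the main loop would do on each character: ask the construct
   --  currently on top of the stack whether this is its end.  The call
   --  dispatches on the actual type of Top.
   function Closes_Top
     (Top : Construct'Class; Position : Positive) return Boolean
   is (Ends_Here (Top, Position));

   A_Code : Code_Fragment;
   A_Link : Link_Text;

begin
   --  Position 8 is the "](" that follows "alpha`".  Whether it closes
   --  anything depends only on what is on top of the stack:
   Put_Line ("top is a link:          "
             & Boolean'Image (Closes_Top (A_Link, 8)));
   Put_Line ("top is a code fragment: "
             & Boolean'Image (Closes_Top (A_Code, 8)));
end Ending_Test_Sketch;

The same two characters "](" at the same position do or do not close
something depending only on which construct sits on top of the stack,
which is exactly why the prefix above cannot be classified until more
input has been seen.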
Basic backtracking would be to push the current state when encountering
an opening marker and, if the end is reached with a non-empty stack, to
pop the top-most state, replace its opening marker by the literal
sequence of its representation, and restart parsing.

If I'm not mistaken, adding precedences to the mix would just change the
meaning of "end is reached" in the previous paragraph: it would mean not
only the end of input, but also the ending marker of any
higher-precedence construct currently in the stack.

Am I right so far? Am I missing something?

Thanks for your help,
Natasha
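P.S.: in case a concrete toy helps, below is a minimal sketch of that
demote-and-restart scheme. It is only a sketch under simplifying
assumptions: it knows just backticks and the bracketed part of links, it
ignores the "(url)" part entirely, and all names are made up.

with Ada.Text_IO; use Ada.Text_IO;

procedure Backtrack_Sketch is

   Input : constant String := "[alpha`](http://example.com/)";

   type Kind is (Code_Fragment, Link);

   --  Opening markers demoted to literal text by an earlier pass are
   --  remembered here and skipped as ordinary characters afterwards.
   Demoted : array (Input'Range) of Boolean := (others => False);

   --  One pass over the input.  Returns 0 when every opened construct
   --  was closed, otherwise the position of the innermost opening
   --  marker still on the stack when the end was reached.
   function One_Pass return Natural is
      Top  : Natural := 0;                            --  stack depth
      Open : array (1 .. Input'Length) of Positive;   --  marker positions
      What : array (1 .. Input'Length) of Kind;
      I    : Positive := Input'First;
   begin
      while I <= Input'Last loop
         if Demoted (I) then
            null;                      --  treat as plain literal text
         elsif Top > 0 and then What (Top) = Code_Fragment then
            --  Inside a code fragment only the closing '`' is special.
            if Input (I) = '`' then
               Top := Top - 1;
            end if;
         elsif Input (I) = '`' then
            Top := Top + 1;  Open (Top) := I;  What (Top) := Code_Fragment;
         elsif Input (I) = '[' then
            Top := Top + 1;  Open (Top) := I;  What (Top) := Link;
         elsif Input (I) = ']'
           and then Top > 0 and then What (Top) = Link
         then
            Top := Top - 1;            --  the "(url)" part is ignored here
         end if;
         I := I + 1;
      end loop;

      if Top > 0 then
         return Open (Top);
      else
         return 0;
      end if;
   end One_Pass;

   Unclosed : Natural;

begin
   loop
      Unclosed := One_Pass;
      exit when Unclosed = 0;
      --  Backtrack: demote the offending opening marker to literal text
      --  and parse everything again.
      Put_Line ("demoting unclosed marker at position"
                & Natural'Image (Unclosed));
      Demoted (Unclosed) := True;
   end loop;
   Put_Line ("all constructs closed; parse accepted");
end Backtrack_Sketch;

On that input the first pass ends with the code fragment opened at
position 7 still on the stack, so its backtick is demoted to literal
text and the second pass accepts everything as a link. Restarting from
the very beginning of the input is of course the laziest reading of
"restart parsing"; resuming from the pushed state would be the obvious
refinement.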