From mboxrd@z Thu Jan 1 00:00:00 1970
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on polar.synack.me
X-Spam-Level:
X-Spam-Status: No, score=-1.3 required=5.0 tests=BAYES_00,INVALID_MSGID autolearn=no autolearn_force=no version=3.4.4
X-Google-Language: ENGLISH,ASCII-7-bit
X-Google-Thread: 103376,b8cf35a2055dbfd9,start
X-Google-Attributes: gid103376,public
From: dennison@telepath.com
Subject: Token package update
Date: 1999/01/22
Message-ID: <78b2nl$poj$1@nnrp1.dejanews.com>#1/1
X-Deja-AN: 435791127
X-Http-Proxy: 1.0 x4.dejanews.com:80 (Squid/1.1.22) for client 204.48.27.130
Organization: Deja News - The Leader in Internet Discussion
X-Article-Creation-Date: Fri Jan 22 23:48:07 1999 GMT
Newsgroups: comp.lang.ada
X-Http-User-Agent: Mozilla/4.5 [en] (WinNT; I)
Date: 1999-01-22T00:00:00+00:00
List-Id:

Success! My management has consented to allow me to release the token
analysis packages I developed here as open source.

It will probably still take a while before I get around to releasing
them. I need to figure out what licensing terms to use, change the file
headers, and generally make sure they are somewhat fit for public
consumption. I suppose I will also need to find a place to put them...
It also looks like I may be able to convince UCF to let me use this work
as part of my master's thesis. If anyone has useful suggestions or
insight into this kind of process, I'd like to hear them.

For those of you who aren't sure what I'm talking about, following is a
repost of a message about the packages from a while back.

--
T.E.D.

In article <74g8hm$55h$1@nnrp1.dejanews.com>, armand0@my-dejanews.com
wrote:
> Hi,
>
> 1. Does anyone know of a parser generator (kind of yacc+lex) that
> produces code in Ada instead of C? In the same way as yacc, I'd like
> to produce an LR parser from a BNF description of my language, with
> semantic actions (written in Ada) associated with each rule.
> It could also be a higher-level tool that takes BNF as input (as is)
> and produces input for yacc+lex.

How about a lower-level solution? I have a set of Ada packages I
developed in-house here to perform lexical analysis within Ada. It's not
a pre-compiler like lex, but an object-oriented set of packages which
perform token analysis for you based on an input syntax which you
define. The syntax consists of an enumerated type for the tokens and a
mapping between those tokens and "recognition" functions. Along with it
come canned recognition functions for most common token types
(identifiers, keywords, floats, integers, newlines, comments,
end-of-file, etc.). You can use inheritance to define your own
recognizers if you have to. But the beauty of it is that the canned
recognizers let you create a lexical analyzer with much less work than
lex requires. It uses an algorithm similar to lex's internally, so it
should (in theory) be just as fast. I believe it's portable; it works
with ObjectAda and GreenHills without modification.

I'm bringing this up to see if there would be any interest from the
community in it, were it to be released as OSS. I've quickly grown
fairly dependent on it, so I'd like to see it out in the open where it
can be better supported and tested. But I'm frankly a bit daunted at the
prospect of bringing this up with management here, so I'd like to get a
feel for whether it would be worth the effort.

I'm sorry, but I don't yet have an analogous facility for parsing. My
parsing needs here are fairly simple (mainly configuration files), so it
wouldn't have been worth the development time to build reusable parsing
packages. But it would make a good research project for my graduate work
(or someone else's)...

--
T.E.D.

-----------== Posted via Deja News, The Discussion Network ==----------
http://www.dejanews.com/       Search, Read, Discuss, or Start Your Own
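To make the design described above a bit more concrete, here is a
hypothetical sketch of what client code for such packages might look
like. Every package, type, and subprogram name below is invented for
illustration only (the actual released API will differ); it shows only
the shape of the idea: an application-defined token enumeration mapped
to canned or inherited recognizers.

```ada
--  Hypothetical sketch only: Hypothetical_Tokenizer and all names in it
--  are stand-ins invented for illustration, not the real packages' API.
with Hypothetical_Tokenizer;

procedure Lex_Demo is

   --  1. The application defines its tokens as an enumerated type.
   type Config_Token is
     (Identifier, Int_Literal, Comment, Whitespace, End_Of_File);

   --  2. Instantiate the analyzer generic on that enumeration.
   package Lexer is new Hypothetical_Tokenizer (Token_ID => Config_Token);

   --  3. Map each token to a recognizer.  Canned recognizers cover the
   --     common cases; a custom one would be derived (via inheritance)
   --     from the library's base recognizer type.
   Syntax : constant Lexer.Syntax_Map :=
     (Identifier  => Lexer.Canned.Get_Identifier,
      Int_Literal => Lexer.Canned.Get_Integer,
      Comment     => Lexer.Canned.Get_Line_Comment ("--"),
      Whitespace  => Lexer.Canned.Get_Whitespace,
      End_Of_File => Lexer.Canned.Get_End_Of_File);

   Analyzer : Lexer.Instance := Lexer.Initialize (Syntax, "input.cfg");

begin
   --  4. Pull tokens until end of file, dispatching on the enumeration.
   loop
      Lexer.Find_Next (Analyzer);
      exit when Lexer.ID (Analyzer) = End_Of_File;
      --  ... act on the recognized token here ...
   end loop;
end Lex_Demo;
```

The point of the shape is that, unlike lex, there is no separate
specification file or code generation step: the "grammar" of the tokens
is ordinary Ada data, so it can be built, inspected, and varied at run
time.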