comp.lang.ada
* Token package update
@ 1999-01-22  0:00 dennison
  1999-01-31  0:00 ` Nick Roberts
  0 siblings, 1 reply; 3+ messages in thread
From: dennison @ 1999-01-22  0:00 UTC (permalink / raw)


Success!

My management has consented to allow me to release the token analysis
packages I developed here as open source. It will probably still take a while
before I get around to releasing them. I need to figure out what licensing terms
to use, change the file headers, and generally make sure they are somewhat
fit for public consumption. I suppose I will also need to find a place to put
them...

It looks like I may be able to convince UCF to let me use my work on this as
part of my master's thesis as well.

If anyone has useful suggestions or insight into this kind of process, I'd
like to hear them.

For those of you who aren't sure what I'm talking about, following is a repost
of a message I sent about the packages a while back.

--
T.E.D.

In article <74g8hm$55h$1@nnrp1.dejanews.com>,
  armand0@my-dejanews.com wrote:
> Hi ,
>
> 1.Does anyone know about some parser generator (kind of yacc+lex)
> that produces code in ADA instead of C? In the same way as
> yacc, I'd like to produce from a BNF description of some my
> language and some semantic actions associated to each rule
> (written in ADA), a LR parser.
> It could also be a higher level tool that inputs some BNF (as is) and
> produces input to yacc+lex.

How about a lower-level solution?

I have a set of Ada packages I developed in-house here to perform lexical
analysis within Ada. It's not a pre-compiler like lex, but an object-oriented
set of packages which perform token analysis for you based on an input syntax
which you define. The syntax consists of an enumerated type for the tokens,
and a mapping between those tokens and "recognition" functions. Along with it
are canned recognition functions for most common token types (identifiers,
keywords, floats, integers, newlines, comments, end-of-file, etc.). You can
use inheritance to define your own recognizers if you have to. But the beauty
of it is that the canned recognizers allow you to create a lexical analyzer
with much less work than you can with lex. It uses a similar algorithm to lex
internally, so it should (in theory) be just as fast. I believe it's portable;
it works with ObjectAda and GreenHills without modification.
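To give a rough feel for the approach, here's a hypothetical sketch of the
design described above. All the names here are invented for illustration
only; the actual package and subprogram names in my code will almost
certainly differ:

```ada
with Ada.Text_IO; use Ada.Text_IO;

--  Hypothetical sketch only: illustrates the client-side shape of the
--  design (enumerated token type plus recognition functions), not the
--  real API of the packages being discussed.
procedure Lexer_Sketch is

   --  Step 1: the client declares its token set as an enumerated type.
   type Token is (Identifier, Integer_Literal, Whitespace, End_Of_Input);

   --  Step 2: each token maps to a "recognition" function.  This trivial
   --  one stands in for a canned identifier recognizer; the real package
   --  would supply canned recognizers for identifiers, keywords, floats,
   --  integers, comments, and so on, with inheritance for custom ones.
   function Recognizes_Identifier (Text : String) return Boolean is
   begin
      if Text'Length = 0 then
         return False;
      end if;
      for I in Text'Range loop
         if Text (I) not in 'A' .. 'Z'
           and then Text (I) not in 'a' .. 'z'
           and then Text (I) /= '_'
         then
            return False;
         end if;
      end loop;
      return True;
   end Recognizes_Identifier;

begin
   if Recognizes_Identifier ("Some_Name") then
      Put_Line ("Token is " & Token'Image (Identifier));
   end if;
end Lexer_Sketch;
```

The point is that the client only supplies the enumeration and picks (or
derives) recognizers; the analyzer itself is reusable.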

I'm bringing this up to see if there would be any interest from the community
in it, were it to be released as OSS. I've quickly grown fairly dependent on
it, so I'd like to see it out in the open where it can be better supported
and tested. But I'm frankly a bit daunted at the prospect of bringing this up
with management here, so I'd like to get a feel for if it would be worth the
effort.

I'm sorry but I don't yet have an analogous facility for parsing. My parsing
needs here are fairly simple (mainly configuration files), so it wouldn't
have been worth the development time to build reusable parsing packages. But
it would make a good research project for my graduate work (or someone
else's)...

--
T.E.D.







Thread overview: 3+ messages
1999-01-22  0:00 Token package update dennison
1999-01-31  0:00 ` Nick Roberts
1999-02-02  0:00   ` dennison