comp.lang.ada
From: Shark8 <onewingedshark@gmail.com>
Subject: Re: Ada programmers: Edward Fish - interview
Date: Fri, 26 Apr 2019 15:41:03 -0700 (PDT)
Message-ID: <1a915f9c-2560-4668-9375-764271dfbffb@googlegroups.com>
In-Reply-To: <5794bba8-6167-4648-9631-732eb0e1a5ac@googlegroups.com>

On Friday, April 26, 2019 at 12:52:46 PM UTC-6, Optikos wrote:
[snip]
> > 
> > So, obviously, the idea isn't new — this invites some questions though:
> >  (a) why didn't any of these take off?
> 
> Two-level grammars did not take off because they incompletely capture the entire semantic meaning of the AARM.

The semantic meaning is the interesting part, and arguably the only part that matters; especially as I am of the opinion that text-based source programs are absolutely the wrong way to address programming.

As I envision a whole tool-suite for Ada, it's not text that should be the "lingua franca", but rather a structured and meaningful form (e.g. the intermediate representation). But Byron needn't go that far, as I doubt many people would *want* to get in on a whole Integrated Development Environment.

My intuition is that limiting the scope of the Byron project to the role of a "compiler" (albeit keeping things 'open'/modular enough that it could be used in a real IDE) would make more people willing to contribute.

> >  (b) what difficulties did they encounter that may still be extant?
> 
> Two-level grammars are unnatural for human beings to think in.  Other approaches such as Z/Zed travel a different avenue.  Other avenues might exist too.

Perhaps the problem is being tied to text and textual modes of thought; as the term 'grammar' indicates, a great deal has been invested in treating programming languages in a textual-linguistic sense.

> >  (c) how much work would it be to build on any of these as opposed to all-new development?
> 
> Much effort.  But then again, GNAT expended an immense amount of effort doing it twice:
> 1) one transliteration of the prose into an Ada-centric semantically-annotated syntax tree written in Ada language
> then once again:
> 2) one transliteration of the Ada-centric semantically-annotated syntax tree tree-transducer into C/C++-centric semantically-annotated tree written in C.
> 
> It seems that human beings manually transliterating the AARM twice is once too many.

A while back, someone mentioned taking a look at "Alphard: Form and Content" -- http://libgen.io/search.php?req=978-1-4612-5979-4&lg_topic=libgen&open=0&view=simple&res=25&phrase=1&column=def / https://d-m.bksdl.xyz/download/book/5c63f88850b4253978a11370 -- I haven't had a chance to read it yet, but according to Wikipedia it seems interesting/intriguing and may have some applicability here:
"Its main innovative feature was the introduction of the 'form' datatype, which combines a specification and a procedural (executable) implementation. It also took the generator from IPL-V, as well as the mapping functions from Lisp and made it general case."

> >  (d) what impacts would later standards (95, 2005, 2012, 2020) have on both the extant work as well as the ability for the underlying designs to address it?
> 
> Conversely, which other languages that have analogous features (e.g., covariant OO polymorphism; parameterized types) have similar semantic definition of behavior—including academic research languages?

I'm not sure what you're getting at here. I was entertaining the notion of a sort of executable and verifiable semantics-describing language as being appropriate; this could be the meta-language described in the tail-end of the interview.

In the sense of the above "meaningful data-structures", perhaps this would be realized as an "execute" method on the nodes of the structure. If the structure is guaranteed to be correct (which we could enforce on construction), then executing these nodes could generate the proper parser/processing-unit.
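
To make that concrete, here's a minimal sketch of the shape I have in mind; the names and the String result are just placeholders, nothing from Byron's actual sources:

   --  Sketch: nodes that know how to "execute" themselves; correctness
   --  is enforced at construction, so Execute may assume a well-formed
   --  structure.  (String is a stand-in for a real result type.)
   package Meaningful_Nodes is
      type Node is abstract tagged private;
      type Node_Access is access all Node'Class;

      --  Executing a node yields the corresponding processing-unit.
      function Execute (Self : Node) return String is abstract;

      --  Constructor functions are the only way to build nodes, so an
      --  ill-formed structure can never exist in the first place.
      type Integer_Literal is new Node with private;
      function Make_Literal (Value : Integer) return Integer_Literal;
      overriding function Execute (Self : Integer_Literal) return String;
   private
      type Node is abstract tagged null record;
      type Integer_Literal is new Node with record
         Value : Integer := 0;
      end record;
   end Meaningful_Nodes;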

> >  (e) are there modern techniques that would work better? [Note: "newer" ≠ "better".]
> 
> Two-level grammars seem to lack enough stepwise refinement.

Interesting.
I wonder if this could be addressed via the same technique of "parselets" that is used in this demo/tutorial of Pratt-parsers: http://journal.stuffwithstuff.com/2011/03/19/pratt-parsers-expression-parsing-made-easy/
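
Translating the parselet idea into Ada terms, it might look something like the following; all the names here are hypothetical, and String just stands in for the parser state and node types:

   --  Hypothetical sketch of "parselets" in Ada terms: each token kind
   --  owns a small object that knows how to parse the construct it
   --  introduces, and the parser merely dispatches through a table.
   package Parselets is
      type Token_Kind is (Tok_Integer, Tok_Plus, Tok_Minus, Tok_LParen);
      type Token is record
         Kind : Token_Kind;
      end record;

      type Parselet is interface;
      --  A real version would take the parser state and return a node;
      --  String stands in for both here.
      function Parse (Self : Parselet; Tok : Token)
         return String is abstract;

      --  Adding a construct means registering a parselet, not editing
      --  a monolithic grammar.
      type Parselet_Access is access all Parselet'Class;
      type Prefix_Table is array (Token_Kind) of Parselet_Access;
   end Parselets;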

> What if Byron would take as input a) the AARM transliterated to a machine-readable encoding of its semantics in some stepwise-refinement language and b) your design document transliterated to a machine-readable encoding of its semantics in some stepwise-refinement language, and then the compiler generator stitched the 2 together, not entirely unlike AOP weavers (on steroids).  The question is:  what is that machine-readable encoding of semantics that begets several stages of stepwise refinement via programmatic-transform multistage programming á la MetaOCaml?  Is it Z/Zed on the front-end driving MetaOCaml programs as some sort of automated Vienna development method (VDM)?

Hm, an interesting idea.
Though I am pretty unfamiliar with both AOP weavers and the Vienna Development Method; how would these tie in together?

> Or is it TRW's SEMANOL?  Would SEMANOL+Ada sufficiently approximate Z+MetaOCaml as a stepwise-refinement multistage-programming environment for generator of code generator of … of code generator?

I have no idea; other than that abstract/paper, I don't think I've seen SEMANOL referred to anywhere.

> The goal of SEMANOL+Ada or Z+MetaOCaml would be to generate the existing Byron source code (as touched up as per the revised design document).

I don't know; the extant code-base isn't the greatest, and is all by-hand: mine.

> Once the generation of syntactic parser was demonstrated, generation of more-sophisticated semantics in fresh Byron source code naturally follow. 

Does it?
I'm not convinced: if we were to start by focusing on capturing the semantics and rules, then I think the syntactic side would follow easily, but the other way around might be more difficult. Just as the write method on a stream is much easier than the read -- example: (23, Dave, 17) can easily be put on a stream as 23Dave17, but there's a problem if the data written is (21, 14, 13), as that produces 211413, which is itself a proper integer.
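
A toy illustration of that asymmetry (ordinary Text_IO rather than real stream attributes, just to show the ambiguity):

   --  Toy illustration: "write" is one line, but the naive output is
   --  ambiguous, so "read" can't recover the original values.
   with Ada.Text_IO; use Ada.Text_IO;
   procedure Write_Is_Easy is
      function Img (N : Integer) return String is
         S : constant String := Integer'Image (N);
      begin
         return S (S'First + 1 .. S'Last);  --  drop the leading blank
      end Img;
   begin
      Put_Line (Img (21) & Img (14) & Img (13));  --  prints "211413"
      --  Reading that back, (2, 11413), (211, 413), (21, 14, 13), ...
      --  are all consistent parses.
   end Write_Is_Easy;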

> >  (f) what *is* the best way to engineer/design a compiler for Ada?
> 
> The best way would be one that transliterates the AARM into a machine-readable semantic language (once) for then fully-automated derivation of the compiler.  I would claim that some of the “numerous passes” performed by GNAT are actually invariants that can be factored out to compiler-authoring-time instead of program-parsing-time.

I would be unsurprised if you're right about that.

> Another definition of the best way would be whatever attracts people and/or funding to pioneer something new and innovative, instead of me-too and imitative.

And there's the sticking point: absolutely none of my non-employment projects since graduation have had any other contributors. (I'm certainly no Torvalds.)
