Path: utzoo!watmath!clyde!caip!nike!ucbcad!ucbvax!SEI.CMU.EDU!firth
From: firth@SEI.CMU.EDU
Newsgroups: net.lang.ada
Subject: implementing USE clauses
Message-ID: <8608081422.AA08967@bd.sei.cmu.edu>
Date: Fri, 8-Aug-86 10:22:46 EDT
Organization: The ARPA Internet

David Lakin raised an interesting question about Ada 'use' clauses, namely: when you are compiling something like

    with HUMUNGOUS_MATHEMATICAL_LIBRARY; use ditto;
    package TINY is
        X : REAL := PI;
    end TINY;

is there a component of the compilation time that is proportional to the size of that HUMUNGOUS math lib?

Naturally, that depends on the implementation. The simplest implementation seems to be for the compiler to go to the Ada source code of the math lib spec, reread it, reparse it, and so on. Are there really compilers that do that? Sure there are! C compilers do that all the time; so do Fortran compilers, PL/I compilers, and lots more. Of course, the program source code says

    INCLUDE HUMUNGOUS_MATHEMATICAL_LIBRARY

and the programmer knows full well that she is asking for, and getting, source file inclusion. This is a reasonably cheap way of adding a separate compilation facility of a sort to a language that otherwise would lack it. And users of C, BCPL, &c know that it really does work, provided you take care with file control, recompilation control, and so on. But it costs: the cost of compiling any unit includes the cost of recompiling all referenced units.
Most of us believe that compilation should be reasonably fast, even in the presence of a referenced "environment". (I happen to be an eccentric who believes that compilation should be unreasonably fast, but no matter.) So to get away with the above implementation, the compilation of THINGS THAT ARE LIKELY TO BE FOUND IN 'INCLUDE' FILES should be very fast indeed. Simple languages typically put constant declarations and macros in such files, so a decent macroprocessor is a help. Ada typically puts declarations in used specs, so the compiler should be very, very good at reading in declarations - and I can't see that as a feasible objective, given the complexity of Ada declarations. Moreover, one might expect Ada style to make more extensive use of large library packages.

How do we speed up the processing of use clauses? One way is to store specs in some predigested form, e.g. an attributed parse tree. The overhead is still linear (we hope), but the linear coefficient will be much smaller, and it is a judgement call whether this is acceptable given the likely size of Ada programs and package specs. Note that if the tree is stored using self-relative pointers, no relocation is necessary when it is fetched into main store: that's another trade-off you can look at.

But consider this. If you can afford to bring an entire withed spec into your compiler data area, you MUST be using virtual memory. So what you are actually doing is replacing one disc representation (the encoded semicompiled file) with another (the paged compiler data area). It is conceptually far simpler to reference the former data area directly, by in effect "demand load" of with'd bits as they are needed. I believe this is indeed the answer, and perhaps if compiler intermediate representations were designed by data-base folk rather than programming-language folk, we'd see that answer more often.
I'd like to discuss the problems it raises, but they are not really relevant to Ada, and this post is already long enough.

Robert Firth