From mboxrd@z Thu Jan 1 00:00:00 1970
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on polar.synack.me
X-Spam-Level: 
X-Spam-Status: No, score=-1.3 required=5.0 tests=BAYES_00,INVALID_MSGID autolearn=no autolearn_force=no version=3.4.4
X-Google-Language: ENGLISH,ASCII-7-bit
X-Google-Thread: 103376,53ca16c587912bce
X-Google-Attributes: gid103376,public
From: dewar@merv.cs.nyu.edu (Robert Dewar)
Subject: Re: Source files organisation using gnat
Date: 1997/07/05
Message-ID: #1/1
X-Deja-AN: 254900535
References: <19970630185901.OAA27670@ladder02.news.aol.com>
Organization: New York University
Newsgroups: comp.lang.ada
Date: 1997-07-05T00:00:00+00:00
List-Id: 

Michel says

<>

People often guess this, but that is all it is, a guess! And in fact it is
almost certainly an incorrect guess.

The advantage of the source model, which you may note is also adopted by the
AdaMagic front ends, is that you do not have to write the tree when you do a
compilation. This definitely saves time, since the tree structures are big
(Ada 83 libraries could quickly get very large).

As to "useless recompilations", actually nothing is uselessly recompiled. The
internal semantic tree of withed units is reconstructed from source, but this
is nothing like a full compilation, since the expensive part, namely code
generation, is always skipped for with'ed units.

The question of whether to save information rather than recompute it is a
rather general one. You definitely cannot assume that it is always more
efficient to save than to recompute.

There are certainly several approaches for significantly increasing GNAT
compilation speed by a large factor, but none of these involves writing
libraries. So far, speeding up compilation has been a lower-priority task. As
the compiler gets into more solid shape, we can spend more resources speeding
up the performance of the generated code, the runtime, and the compiler
itself.
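To make the "code generation is skipped for with'ed units" point concrete, here is a minimal sketch using GNAT's -gnatc switch, which runs the front end only (parsing and semantic analysis, no code generation). The file names and package are made up for illustration; the sketch assumes a GNAT installation on the PATH and falls back to a message if none is found.

```shell
# A minimal sketch of the source model: hypothetical unit Main
# withs hypothetical package Pkg. During a normal compile of
# main.adb, GNAT re-analyzes pkg.ads from source but generates
# code only for main.adb itself. The -gnatc switch invokes that
# cheap semantics-only pass directly.
if command -v gnatmake >/dev/null 2>&1; then
  mkdir -p /tmp/src_model_demo && cd /tmp/src_model_demo
  cat > pkg.ads <<'EOF'
package Pkg is
   procedure Hello;
end Pkg;
EOF
  cat > pkg.adb <<'EOF'
with Ada.Text_IO;
package body Pkg is
   procedure Hello is
   begin
      Ada.Text_IO.Put_Line ("hello");
   end Hello;
end Pkg;
EOF
  cat > main.adb <<'EOF'
with Pkg;
procedure Main is
begin
   Pkg.Hello;
end Main;
EOF
  # Front end only: analyzes main.adb plus the spec of Pkg,
  # emits no object code at all.
  gcc -c -gnatc main.adb
else
  echo "GNAT not installed; commands shown for illustration only"
fi
```

The point of the sketch is that reconstructing the semantic tree of pkg.ads from source is exactly this -gnatc style of work, a small fraction of the cost of a full compile.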
But don't look for us to switch to a library model, since this would actually
*slow down* compilation, and this will be even more true when some of the
speed-ups we do plan get done. In particular, the reading of with'ed files is
a front end activity, and so far the distributed versions of GNAT have been
the full debugging versions, with all sorts of very inefficient assertions
around (just take a look at the sources and grep for assert :-)

Now that the front end is pretty stable, we are experimenting to see if it is
time to generate the originally planned fast inlined version of the front
end. We are not sure that this will be a feature of the forthcoming 3.10
release, but it may be, and if not it certainly will be in the subsequent
release.

Actually most people who have run into slow compilation times in GNAT are
dealing with cases where the GCC back end is slow, and the issue of library
vs recreating from source is of course entirely irrelevant to such problems.

For most, but not all, users, the compilation performance of GNAT, even with
the full assertions on, is acceptable. Certainly that is the way I always
run, and it takes me 13 minutes to compile the GNAT sources, and under 5
minutes to recompile the GNAT library (nearly 300 units), and that's fast
enough that I don't bother to build the fast compiler yet.
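The "grep for assert" suggestion above can be sketched self-containedly. The fragment below is made up, not real GNAT source; against an actual GNAT source tree you would point grep at the compiler sources instead of this throwaway directory.

```shell
# Create a hypothetical Ada fragment in the style of a GNAT
# debugging build, then list its assertions with file and line.
mkdir -p /tmp/gnat_grep_demo
cat > /tmp/gnat_grep_demo/sem_util.adb <<'EOF'
--  Hypothetical excerpt: the distributed debugging builds of
--  GNAT are full of pragma Assert checks like this one.
procedure Sem_Util is
   N : constant Integer := 1;
begin
   pragma Assert (N > 0);
   null;
end Sem_Util;
EOF
# Recursively list every assertion with file name and line number:
grep -rn "pragma Assert" /tmp/gnat_grep_demo
```

Each pragma Assert is evaluated at run time in an assertion-enabled build, which is why the distributed debugging versions of the front end pay a speed penalty.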