From: "Dmitry A. Kazakov"
Newsgroups: comp.lang.ada
Subject: Re: Size of linked program increasing with new version of GNAT.
Date: Wed, 31 Dec 2014 13:45:59 +0100
Organization: cbb software GmbH
References: <7f38a07d-3f73-432c-8647-e3a7dcf43637@googlegroups.com>
Reply-To: mailbox@dmitry-kazakov.de

On Wed, 31 Dec 2014 04:08:13 -0800 (PST), Jean François Martinez wrote:

> If we intend to measure the actual quality of the compiler at
> generating small code, then the significant variable is the size of
> the .o file, not the size of the executable.

I remember a FORTRAN-IV compiler that generated direct and indirect
code. The latter was a kind of interpreted code: each "instruction" was
a jump to a short subprogram implementing its semantics. The direct
code was twice as large as the indirect one and 50% faster. So, no, the
size of the object files alone tells nothing.

> The executable is bloated by things like the runtime, libraries, the
> dynamic linker (or the first stage of the dynamic linker) and so on.

The code size can be approximated by a linear function A + B*x, where x
is the program's "complexity". The argument that A might be irrelevant
holds only if B is correspondingly small compared to the alternatives,
so that with growing x you could compensate for the losses on A.
Regarding GNAT, I doubt that B is really small, especially if generics
are actively used, and A is substantially big, as we all know.

> So the more capable your environment, the bigger the program.

Which should be the opposite, actually, since you could get some
functionality from the environment. The problem is that the
capability/useless-garbage ratio is quite low in modern environments,
and the trend is that it rapidly approaches zero.

> So if the program is small and will run on a half-decent box, you
> don't care. Remember the Eee PC, that five-year-old laptop good for
> web surfing and little more? It had 1 GB of memory, a million KB. So
> who cares about the size of hello world?

The empirical law of software teaches us that any available resource
will be consumed. A more specialized law, which is a consequence of the
former, is: Windows is here to make an i486 out of any i7.

> If your program has megs and megs of code, then the overhead due to a
> more capable runtime will be completely irrelevant relative to the
> "pure" (that is, .o) size of your program and third-party (i.e. not
> related to GNAT) libraries.

Which code must be linked dynamically or else, on systems without
virtual memory, loaded into RAM. Thus a large A usually translates
directly into longer start-up times. The effect we all know: each new
system, each new version takes 20% more time to start.
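To see the constant A in isolation, compare the object file with the
linked executable of a near-empty program. A minimal sketch, assuming a
Unix-like shell with binutils' size and gnatmake on the path (the exact
figures will of course vary with the GNAT version and target):

   --  hello.adb: the smallest program that still drags in the run-time
   with Ada.Text_IO;
   procedure Hello is
   begin
      Ada.Text_IO.Put_Line ("Hello, world!");
   end Hello;

   $ gnatmake hello.adb    # compile, bind and link
   $ size hello.o hello    # hello.o is roughly B*x; the executable
                           # adds A: run-time, elaboration code, etc.

The difference between the two figures is a crude estimate of A. B is
what you pay again for every unit, and every generic instance, you add.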
The time a program needs to start is a good estimate of the product
line's age. E.g. compare the start times of:

   Borland C++
   MS Visual Studio
   AdaCore GPS

--
Regards,
Dmitry A. Kazakov
http://www.dmitry-kazakov.de