From: ok@goanna.cs.rmit.edu.au (Richard A. O'Keefe)
Subject: Re: Q: access to subprogram
Date: 1996/07/29
Message-ID: <4th822$pq6@goanna.cs.rmit.edu.au>
References: <4rb9dp$qe6@news1.delphi.com>
Organization: Comp Sci, RMIT, Melbourne, Australia
Newsgroups: comp.lang.ada

ok@goanna.cs.rmit.edu.au (Richard A. O'Keefe) writes:

> I spent this afternoon (when not talking to students) taking the
> integration example and beefing it up to the point where it would
> plausibly take a reasonable amount of time.  I produced three
> versions:

[now four, using two C compilers]

Lang     Time  Size   How compiled
Ada        18    94   3.04 gnatmake -gnatp -cargs -O4
Pascal     18    57   SPARCompiler 4.0 pc -s -native -O4
C          20    92   2.7.2 gcc -O4
C          13    92   SPARCompiler 4.0 cc -fsimple -native -fast -xO4
Scheme      6    40   stalin -copt "-O4" -d -On -dc
Scheme      5    40   stalin -cc cc -copt "-native -xO4" -d -On
      Seconds Lines

[All of these were double precision; single precision trimmed 1 second
off the time for C and Pascal, made no apparent change for Ada, and had
yet to be tried at the time of reposting for Scheme.]

rogoff@sccm.Stanford.EDU (Brian Rogoff) writes:

>Note that the Pascal compiler was not any better than GNAT on this.

The C code simulates a model where top level procedures do not have a
static link, but nested procedures do (a realistic implementation
model).  The C times bracket the Ada/Pascal times.

The really interesting thing was that the Ada, Pascal, and Scheme
versions worked the first time they got through the compiler, while the
C version crashed.  (lclint pointed me to an uninitialised variable...)
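To make that implementation model concrete, here is a rough sketch of
the kind of hand simulation the C version needs.  This is an
illustration only, not the code from the experiment; the names, the
integrand, and the trapezoidal rule are invented for the example.  The
point is that the integrand travels as a code pointer plus an explicit
environment pointer, playing the role of the static link that Ada,
Pascal, and Scheme supply for free.

    /* Sketch only: a hand-built "closure" = code pointer + environment
     * pointer, simulating a nested-procedure parameter in plain C.     */
    #include <stdio.h>

    /* The integrator receives the simulated closure in two pieces. */
    static double integrate(double (*f)(double, void *), void *env,
                            double lo, double hi, int steps)
    {
        double h = (hi - lo) / steps;
        double sum = 0.5 * (f(lo, env) + f(hi, env));
        int i;

        for (i = 1; i < steps; i++)
            sum += f(lo + i * h, env);
        return sum * h;               /* composite trapezoidal rule */
    }

    /* The variables the would-be nested function "captures" must be
     * gathered into a record by hand.                               */
    struct scaled_env { double a, b; };

    static double scaled(double x, void *env)
    {
        struct scaled_env *e = env;
        return e->a * x * x + e->b;   /* a*x**2 + b, a and b captured */
    }

    int main(void)
    {
        struct scaled_env e = { 3.0, 1.0 };   /* fill in the environment */

        /* integral of 3*x**2 + 1 over [0, 2] is 10 */
        printf("%f\n", integrate(scaled, &e, 0.0, 2.0, 100000));
        return 0;
    }

Every variable the simulated nested function uses has to be packed into
the environment record and unpacked again by hand; forgetting to fill in
one of those fields is just the sort of slip that produces the
uninitialised-variable crash mentioned above.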
>The data
>does suggest that the nested procedure approach may have performance
>advantages in addition to its general advantages as a simple programming
>construct.  What is stalin doing?  Is it dependent on any simplifying
>assumptions about the code?

The -d option says "inexact numbers are C doubles".  The -On option says
"do not check for exact integer overflow", which I put in to make the
comparison fair.  The -dc option says "yes you can use alloca"; as you
see from the last line of the table, leaving it out didn't make much
difference.

Does Stalin make any simplifying assumptions about the code?  Well, it
doesn't support bignums or rationals (neither do Ada, Pascal, or C).
It doesn't support complex arithmetic (neither does C, and neither does
SPARCompiler Pascal, even though it's in the current standard).  It
doesn't support DELAY/FORCE (nor do Ada, Pascal, or C).  It doesn't
support LOAD (adding new code at run time, or replacing existing code).
Neither do Ada, Pascal, or C.

Stalin makes *no* other assumptions about the program.  Even CAR and CDR
receive no special magic treatment.  It just does a really thorough job
of static analysis and optimisation.  Jeff Siskind has put a *lot* of
work into the algorithms in this compiler.  It's a one-man show; he is
the only one working on it, and his latest grant proposal to do more
work on it was turned down, so it is also by a rather large margin the
slowest of the compilers tested.

This is a pity, because while a lot of the methods in Stalin are related
to dynamically typed languages, "Stalin also does global static
life-time analysis for all allocated data.  This allows much temporary
allocated storage to be reclaimed without garbage collection.  Finally,
Stalin has very efficient strategies for compiling closures."  Of
course, when you introduce OOP, dynamic typing joins the party, so it is
not clear that Stalin has nothing to contribute to the art of Ada
compilation.

>A shared generic passes in some way a pointer to a data area identifying the
>generic, and containing its local objects.

I see.  This is like "type classes" as found in Mercury and Haskell.
I note that Haskell manages to combine this with unrestricted closures;
I don't know how the Glasgow Haskell Compiler handles it, but I don't
_think_ it involves run-time code generation.

>The result is that eliminating the restrictions on 'Access would make some
>code easy to write, but would force the junking of existing code sharing
>algorithms, or make the cost of access-to-subprogram substantially higher than
>in competing languages.  Those both are very bad ideas.

Ah *hah*.  This is *not* an argument against procedure parameters, it is
an argument against identifying them with procedure *pointers*.  The
force of the integration example is that even though current Ada "has"
procedure parameters, they can't be *used* for the things that people
have historically wanted to use them for.

Now I understand why it was thought that the maximum size of the display
mattered: people wanted to know how much stuff to put in a procedure
pointer, and a procedure pointer of a type declared at lexlevel N needs
N+c pointers, and a pointer declared who-knows-where needs
who-knows-how-many pointers.  Fine, but NONE of this is a problem with
procedure parameters, only with bodging them up Borland-style as
procedure pointers.

>How much are upward funargs used in day to day functional programming?  I
>think Bob Duff is still looking for a compelling example of why they are
>so much better than the explicitly simulated closure via an instance of a
>tagged type.

Whenever you simulate any mechanism,

 - you lose clarity.
   This is demonstrated by the line counts in the table above.

 - you introduce the risk of error.
   The Scheme and Pascal versions didn't have to "simulate" anything.
   They worked as soon as the compiler was happy with them.  The C
   version crashed, precisely *because* having to simulate the feature
   added complexity, and I slipped, even though I was transcribing code,
   not rewriting from scratch.  All I had to worry about was how to
   simulate the feature, in < 100 lines of code, and I got it wrong.

 - you lose efficiency.
   The compiler doesn't understand the feature.  It compiles your
   simulation using general methods.  In this example, even when I told
   the SPARCompiler C compiler (there is no redundancy there; that's its
   name) to in-line some crucial functions, it didn't improve the speed.
   The fact that Stalin really thoroughly understands procedures does
   pay off in this small example, where procedure calling is a major
   cost factor.

One might as well say, "why have direct language support for OOP?  You
can always simulate it!"  And in fact Scheme programmers have often said
this, because a language that takes procedures really seriously makes it
dead simple to simulate OOP with procedures.

-- 
Fifty years of programming language research, and we end up with C++ ???
Richard A. O'Keefe; http://www.cs.rmit.edu.au/~ok; RMIT Comp.Sci.