From: ncohen@watson.ibm.com (Norman H. Cohen)
Subject: Re: Q: access to subprogram
Date: 1996/07/14
Message-ID: <4sbboe$ful@watnews1.watson.ibm.com>
References: <4rb9dp$qe6@news1.delphi.com> <4rre7j$pv7@watnews1.watson.ibm.com>
Organization: IBM T.J. Watson Research Center
Reply-To: ncohen@watson.ibm.com
Newsgroups: comp.lang.ada
Distribution: world

In article , bobduff@world.std.com (Robert A Duff) writes:

|> One can represent closures explicitly using heap-allocated tagged
|> objects.  A closure in a language that supports full closures is
|> represented by the address of the function, plus some representation
|> of the environment in which that function is called.  And this stuff
|> has to be allocated in the heap, since it can live essentially
|> forever.

Correct.  The usual implementation of the stack in a "retention-strategy"
language (so called because the environment of a call may be retained
even after that call returns) is as a linked list of heap-allocated
activation records.  Pushing a record onto the stack consists of setting
its link field to point to the old top record and setting the top
pointer to point to the new record; popping the stack consists of
setting the top pointer to point to whatever the link field of the top
record currently points to.  (A sketch of this scheme appears below.)
If the environment pointed to by the old top record did not become part
of any closure, popping makes that record inaccessible, and eventually
it is garbage-collected.  Otherwise, since some live object still points
to the old top record, it remains in existence.  (Association lists work
this way in LISP, and full closures are constructed by calling FUNCTION,
which returns a list consisting of the special atom FUNARG, the A-list
for the environment in which FUNCTION was evaluated, and the
S-expression that was passed to FUNCTION.)

|> One can do the same thing *explicitly* in Ada, by putting the
|> "environment" stuff into an object, and allocating it on the heap.
|> I show a different version of package body Ackermann_Computation
|> below, that works this way.
|>
|> Doing this requires the data that's in the closure to be explicitly
|> marked as such (by being components of the tagged type(s)).  In
|> Norm's "true closures" solution, it's implicit -- that is, whatever
|> Norm's Next_Row function happens to reference is what needs to be
|> packaged up in the closure.  Is there some advantage to that
|> implicitness?  Does it make things substantially more convenient in
|> some way that I don't understand?  It seems to me that it's an
|> *advantage* to make things explicit -- it seems nice that a local
|> variable (or parameter) in Ada doesn't live longer than the
|> containing scope -- it makes it easier to understand what the purpose
|> of that local is.  And if you want it to live longer, you
|> *explicitly* put it in the heap.

Whether the explicitness is a blessing or a curse depends on the level
of abstraction you are used to.
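Concretely, the push and pop just described amount to two pointer
assignments apiece.  Here is a minimal sketch; the names
(Retention_Stack, Activation_Record, Frame_Ptr, and so on) are
hypothetical, and reclamation of unreachable records is assumed to be
done by a garbage collector, which Ada implementations are not required
to provide:

   package Retention_Stack is
      type Activation_Record;
      type Frame_Ptr is access Activation_Record;
      type Activation_Record is
         record
            Link : Frame_Ptr;  -- the record below this one on the "stack"
            --  ... saved parameters and locals would go here ...
         end record;

      Top : Frame_Ptr := null;

      procedure Push (Frame : in Frame_Ptr);
      procedure Pop;
   end Retention_Stack;

   package body Retention_Stack is

      procedure Push (Frame : in Frame_Ptr) is
      begin
         Frame.Link := Top;    -- new record points at the old top record
         Top        := Frame;  -- the new record becomes the top
      end Push;

      procedure Pop is
      begin
         --  If no closure captured the old top record, it is now
         --  unreachable and may be reclaimed; otherwise it survives
         --  as part of the captured environment.
         Top := Top.Link;
      end Pop;

   end Retention_Stack;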
I presume that in typical circumstances you find the implicit
manipulation of registers in

   A := B + C;

preferable to the explicit manipulation of registers in

   LOAD  R1, B
   LOAD  R2, C
   ADD   R3, R1, R2
   STORE R3, A

Similarly, for a programmer in a functional language such as ML,
closures and higher-order functions are the usual language of discourse,
the lowest level of abstraction.  An ML programmer does not want to be
bothered with the low-level details of implementing closures.  The
purpose of adding full closures to Ada would be to allow programming at
that same very high level of abstraction.  (This was not, of course,
part of either the Ada-83 or Ada-95 requirements.)

|> The solution below explicitly creates heap objects representing
|> closures, whereas Norman's solution implicitly allocates some stack
|> frames in the heap.

Bob's solution is similar to one I created several years ago when
experimenting with the use of Ada-83 tasks to represent functions
manipulable as first-class objects.  The task type had an Apply entry
with an in parameter giving the function argument and an out parameter
giving the function result.  There were other entries to "construct"
such a function; that is, calling one of those entries would place the
task in a state where future calls on the Apply entry would cause it to
behave like one function or another.  One of the entries caused the
task to move into a loop where Apply acted like the R[0] function,
setting its out parameter to one more than the value of its in
parameter.  Another entry, taking a pointer to a task object of the
same type, presumably representing some Ackermann "row" R[i], would
cause the task to move into a loop where Apply acted like the R[i+1]
function, using its in parameter to control how many times the R[i]
function would be applied to 1, and setting the out parameter to the
result.  (A sketch of such a scheme appears at the end of this post.)
(It would have been nice to have task types where different task
objects had different task bodies implementing the same interface, but
that's another discussion for another day.)

|> What's the most *efficient* method for calculating Ackermann's?

No doubt to special-case the lower rows:

   function Ackermann (M, N : Unbounded_Natural)
     return Unbounded_Natural is
      Counter : Unbounded_Natural;
      Result  : Unbounded_Natural;
   begin
      if M = 0 then
         return N + 1;
      elsif M = 1 then
         return 2*N + 3;
      elsif M = 2 then
         return 2**(N+2) - 3;
      else
         Counter := N + 1;
         Result  := 1;
         while Counter /= 0 loop
            Result  := Ackermann (M-1, Result);
            Counter := Counter - 1;
         end loop;
         return Result;
      end if;
   end Ackermann;

Note that Ackermann(3,n) is (if I didn't make a clerical error)

            3
           2  - 1
          .
         .
        .
      2        - 1
     2          - 1
    2            - 3

that is, 2**(2**( ... (2**(2**3 - 1) - 1) ... ) - 1) - 3, where the
number of 2s is n+1.  Thus even using some special representation for
unbounded integers, we quickly exhaust all the storage in the address
space just representing the result, so we need not worry about
efficiency past this point; the computation is intractable.
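As a postscript, here is a minimal sketch of the kind of task-based
scheme described above.  All names (Task_Closures, Function_Object,
Make_Row_0, Make_Successor, Apply) are hypothetical, the details surely
differ from the original experiment, and "apply R[i] to 1, X times" is
only one plausible reading of the row construction:

   with Ada.Text_IO;

   procedure Task_Closures is

      type Function_Object;               -- a task acting as a function
      type Function_Ptr is access Function_Object;

      task type Function_Object is
         entry Make_Row_0;                              -- act as R[0]
         entry Make_Successor (Prev : in Function_Ptr); -- act as R[i+1]
         entry Apply (X : in Natural; Y : out Natural);
      end Function_Object;

      task body Function_Object is
         Base_Case : Boolean := True; -- True: R[0]; False: a successor row
         Below     : Function_Ptr;    -- the R[i] task, when a successor
         Value     : Natural;
      begin
         --  First, wait to be "constructed" as one function or the other.
         select
            accept Make_Row_0;
         or
            accept Make_Successor (Prev : in Function_Ptr) do
               Base_Case := False;
               Below     := Prev;
            end Make_Successor;
         end select;

         --  From now on, Apply behaves like the chosen function.
         loop
            select
               accept Apply (X : in Natural; Y : out Natural) do
                  if Base_Case then
                     Y := X + 1;          -- R[0](n) = n + 1
                  else
                     Value := 1;          -- apply R[i] to 1, X times
                     for I in 1 .. X loop
                        Below.Apply (Value, Value);
                     end loop;
                     Y := Value;
                  end if;
               end Apply;
            or
               terminate;
            end select;
         end loop;
      end Function_Object;

      R0, R1 : Function_Ptr;
      Result : Natural;

   begin
      R0 := new Function_Object;  R0.Make_Row_0;
      R1 := new Function_Object;  R1.Make_Successor (R0);
      R1.Apply (3, Result);     -- R[0] applied 3 times to 1, yielding 4
      Ada.Text_IO.Put_Line (Natural'Image (Result));
   end Task_Closures;

--
Norman H. Cohen                                  ncohen@watson.ibm.com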