From mboxrd@z Thu Jan 1 00:00:00 1970
From: jhd@herold.franken.de (Joachim Durchholz)
Subject: Re: Real OO
Date: 1996/03/29
Message-ID: <65lDeVZF3RB@herold.franken.de>
Distribution: world
References: <4id031$cf9@dayuc.dayton.saic.com>
Newsgroups: comp.lang.eiffel,comp.lang.ada,comp.object
List-Id:

Gosh, this thread is becoming quite verbose. And I'm losing some
ground... but let's see what remains of my points ;).

> Typically, a programmer comes upon this knowledge (that the dynamic type
> is exactly the specific type) because it is an obvious consequence of the
> algorithm, not because he has performed a sophisticated data flow
> analysis by hand! If an Ada programmer is mistaken, he will be told so
> by the failure of a tag check, at precisely the site of the attempt to
> assign or pass a value with one dynamic type to a variable declared to be
> of some different specific type. No hunting necessary.

My (somewhat dogmatic) belief is that a program shouldn't fail with a
typing error at run time. Types are a mechanism introduced to allow the
compiler to catch errors. This is a pure opinion, based on nothing more
than the void in my head ;) - but I think I could justify it, though
with much handwaving.

> The results
> presented by Calder and Grunwald in their 1994 POPL paper are different.
> Among other things, they measured:
>
> 1.
>    the percentage of indirect function calls executed during their
>    experiments that invoked methods that were never overridden
>    (a condition that is, in principle, straightforward to detect
>    at link time, but not sooner)
>
> 2. the percentage of indirect function calls executed during their
>    experiments at call sites from which only one target was invoked
>    during THAT execution of the program
>
> ...
>
> Calder and Grunwald describe Measurement 2 as an upper bound on the
> number of dynamic calls that could be turned into static calls by
> powerful interprocedural optimizations, and Measurement 1 as the lower
> bound we would expect of such optimizations, since that much
> improvement can be achieved even by a naive interprocedural
> optimization. The mean values, over the programs they measured, were
> 31.6% for Measurement 1 and 66.3% for Measurement 2, a far cry from
> your Booch's 80%.

Umm, I cannot really comment on this - I don't have such detailed
statistics. Anyway, I'd be interested to know how many of the
Measurement 2 calls that were not in Measurement 1 would also have been
present in an Eiffel program. I.e. to actually determine the utility of
explicit frozen (= final) declarations, it would be necessary to
determine

1) how many calls that could have been made static aren't made static
   by the Eiffel compiler, and

2) how many calls that the developer would have made static would lead
   to a run-time error or make the life of other developers more
   difficult.

I don't see an easy way to give solid numbers. The first criterion
vanishes as soon as it is measured (any tool that measured the number
could immediately be converted into a data-flow analysis phase of the
compiler), and the second one cannot be determined without introducing
ideological debate.
The Eiffel standpoint is that such optimizations introduce a source of
errors, and that they can be done by the compiler (in the light of the
above argument, it would be "*most* can be done by the compiler"). This
is a standpoint that influences many language decisions, not just the
question of static (early) binding. In effect, these decisions take a
lot of burden from the application programmer and put it upon the
compiler writer (which makes writing a good Eiffel compiler considerably
harder - but this is more of a disadvantage wrt. C++ than Ada).

> In other words, the optimization will
> seem more "worthwhile" to Eiffel programmers than to Ada programmers
> because Eiffel programs need it more badly.

Agreed.

> |> In Eiffel, the programmer "declares" an object to belong to a specific
> |> type by not assigning any object of a descendant type to it. I think this
> |> is less error-prone and requires less thinking about an otherwise
> |> uninteresting detail.
>
> Of course this is not a declaration at all. It is a run-time property
> that may or may not be determinable at compile time; and if determinable
> at all, may be determinable only by sophisticated analysis.

I think most cases can be determined without doing any semantic
analysis, simply because most such "declarations" are routinely applied
by the programmer, following standard practice - and those should be
just the cases that can be statically determined. This argument will, of
course, only hold as long as I assume that the 33% difference between
Measurements 1 and 2 is specific to C++. One should really do such a
measurement on an Eiffel program.

Another source of missed optimizations may be the cases where there is a
polymorphic function with a small and fixed number of variants. It may
be worthwhile to replace such a function by a set of non-polymorphic
functions that are statically linked. But that's probably an issue for
research right now.
> In a previous post, I wrote
>
> |> > A classwide routine is, by definition, one whose
> |> > correctness does not depend on the dynamic types of the objects it is
> |> > dealing with.
>
> and Joachim responded:
>
> |> True. But then, how do I know that? I think cases where I'm sure that
> |> this will hold are very rare. And there is always the possibility that
> |> somebody will extend the class in a way that I never anticipated, and in
> |> such a case, it is quite likely that routines that I considered classwide
> |> are anything but classwide in the new design.
>
> Formally, you know that because the correctness of the classwide routine
> depends only on the specifications (preconditions and postconditions) of
> the (root versions of) the methods it invokes.

Agreed. But sometimes I want to replace a routine for efficiency
reasons, without altering the outside semantics (e.g. because I can
utilize a better data structure in a descendant that wasn't available in
the parent).

> No, the proper solution is to add the classwide function
>
>    function Geometry_Of
>      (Graphic: Graphic_Type'Class) return Shape_Pointer_Type is
>    begin
>       return Graphic.Shape;
>    end Geometry_Of;
>
> (and possibly a Set_Geometry_Of procedure). Then, to test whether
> Graphic_Type instances G1 and G2 have congruent polygons, we write
>
>    Congruent (Geometry_Of(G1).all, Geometry_Of(G2).all)

Yes, this is a valid solution. Still, it requires me to use a client
relationship for purely technical reasons. Example: Imagine there is an
existing system of several zillion lines that uses the Shape_Type class,
but without the filled shapes. For reasons that cannot be discussed away
(i.e. usually customer demand) the existing class structure has to be
extended with the filled shapes. The above solution would require me to
rewrite lots of calls to Congruent (and many other routines).
To get away from the specifics of the Shape_Type example, I'd like to
take a step back and take a broader look at the problem: Let's have a
base class BASE with some equivalence relation BASE.equivalent(BASE):
BOOLEAN. In some descendant types, the equivalence conditions will be
modified.

The equivalence conditions will vary among the descendants of BASE: some
will have conditions that differ from those of BASE, others will share
their conditions with other classes in the inheritance graph. In the
case of shared conditions, the equivalent() routine should be declared
only in the base class of the class group and inherited in the other
classes. When comparing elements of groups that don't have identical
equivalence conditions, the comparison may either always return false
or - if the classes inherit from each other - the objects may be
compared according to the equivalent() routine of the class that is
higher up in the class hierarchy.

I can't give an implementation of this in Eiffel right now - it's too
late already. I'll give it a try this weekend. Anybody who's more awake
right now is invited to implement this.

-Joachim

-- 
I'm speaking for myself here.