From: stt@houdini.camb.inmet.com (Tucker Taft)
Subject: Re: Beware: Rep spec on an enumeration type causes code explosion
Date: 1997/12/15
Organization: Intermetrics, Inc.
Newsgroups: comp.lang.ada

Robert Dewar (dewar@merv.cs.nyu.edu) wrote:
: ...
: I really don't know what other compilers do here! But I suspect this whole
: area may be quite compiler dependent, so I suggest that you benchmark your
: particular usage against the compilers you are considering using, advice
: that always makes sense!

This kind of statement is exactly what makes me feel the feature should not
have been in the language in the first place, or, as Robert suggested, that
the operations that might involve mapping should have been prohibited
(though that would give the enumeration representation clause more effect on
semantics than is usually allowed). Interestingly, the Ravenscar real-time
workshop proposed a restriction that essentially means "no potentially
expensive holey-enum-type operations."

Of course, all readers of this news thread will now be more careful in their
use of the holey enum type feature. However, my preference is for language
features whose (reasonably) efficient implementation is intuitively obvious,
with all compilers doing essentially the same thing, and all programmers
having an intuitive grasp of the overhead of various operations.
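To make the mapping cost concrete, here is a minimal sketch in C of the kind of tables a compiler *might* generate for a holey enumeration (the type, its representation values, and the helper names are all hypothetical, as if the Ada source said "type Status is (Off, Standby, On); for Status use (Off => 1, Standby => 4, On => 32);"). The point is that 'Val is a cheap table lookup, while 'Pos may require a search, and 'Succ pays for both:

```c
#include <assert.h>

/* Hypothetical position -> representation table for a holey enum
 * declared (in Ada) as:
 *   type Status is (Off, Standby, On);
 *   for Status use (Off => 1, Standby => 4, On => 32);
 */
static const int status_rep[] = { 1, 4, 32 };
enum { STATUS_COUNT = sizeof status_rep / sizeof status_rep[0] };

/* Status'Val: position to representation is a simple indexed load. */
int status_val(int pos) { return status_rep[pos]; }

/* Status'Pos: representation to position needs a search
 * (linear here; a compiler could use binary search or a hash). */
int status_pos(int rep)
{
    for (int i = 0; i < STATUS_COUNT; i++)
        if (status_rep[i] == rep)
            return i;
    return -1; /* in Ada this case would raise Constraint_Error */
}

/* Status'Succ: successor goes through both mappings
 * (no range check on the last value in this sketch). */
int status_succ(int rep) { return status_val(status_pos(rep) + 1); }
```

Even this simple sketch shows why loops over such a type, or 'Pos on a value read from hardware, can cost far more than the same operations on a contiguous enumeration.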
If you have to start running examples through a compiler to learn how to
properly use a language feature on that particular compiler, that is a bad
sign. Of course nifty optimizations are nice, but they should be incremental
improvements on something that is already acceptably fast. If the real-time
correctness of your program begins to depend heavily on particular
compiler-specific optimizations and transformations, then the "standard"
language features are not what they should be.

Other examples where this principle was violated: generics, where some
compilers macro-expand and others (partially or universally) share;
exceptions, where some compilers introduce an overhead on entry to a
handled sequence of statements and others don't; finalization, where the
declaration and cleanup overhead varies dramatically; rendezvous, where
some compilers apply certain special transformations and others don't; and
so on. These differences can dramatically affect the "right" way to use the
language, which means it really isn't a single language, but rather a set
of languages that happen to share syntax.

This is one case where putting more implementation guidance into the manual
might help establish stronger norms. One might even go so far as to include
relative timing tests in the conformance tests, where operations are
required to take no more than some basic unit, such as the null procedure
call time, or to take no more space than that given by some formula. This
is all pretty radical thinking, but for a real-time language it might make
sense. The "metrics" paragraphs in the Real-Time Annex are an attempt to
push in this direction: rather than saying that something felt to be
"critical" had to be fast, we simply required a vendor to measure its
performance. The embarrassment factor would, one hopes, mean that vendors
would spend the energy to make the given "critical" operation efficient.
: Robert Dewar
: Ada Core Technologies

-- 
-Tucker Taft   stt@inmet.com   http://www.inmet.com/~stt/
Intermetrics, Inc.   Burlington, MA   USA