From: bobduff@world.std.com (Robert A Duff)
Subject: Re: Overload Resolution in Ada (Re: The Red Language)
Date: 1997/09/19
Organization: The World Public Access UNIX, Brookline, MA
Newsgroups: comp.lang.ada
References: <340E2DC5.25D7@worldnet.att.net>

In article , Brian Rogoff wrote:
>My apologies for the off topic post...

Heh?  Surely you jest!

>Out of curiosity, what is the worst case running time(s) of the two pass
>algorithm(s)?

Umm, I don't know.  I guess you can make it arbitrarily slow.  Are you
asking for O(f(N)), where N is the number of nodes in the expression
tree (in the "complete context", which is a single statement or
declaration or whatever)?  Well, I don't think you can answer that
usefully, because it's also affected by the number of currently visible
declarations, by the symbol table organization, by the details of the
identifier lookup mechanism, and by the representation chosen for "sets
of possible interpretations".  E.g.:

    procedure P(X: T_1);
    procedure P(X: T_2);
    ...
    procedure P(X: T_10_000);

    X: T_1;

    P(X);

Resolving the procedure call is probably going to have to look at all
10,000 declarations of P, and we can make it arbitrarily worse without
changing the size of the procedure call itself.  On the other hand, the
compiler could make this particular sort of case much faster using
various tricks.

I suppose you could say the 2-pass algorithm looks at each node O(N)
times (at most twice each), but that's a useless measure, since the
"look at a node" operation is itself an enormously complicated
algorithm.

Symbol table lookups in compilers often have pretty good average time,
but horrible worst-case time.  A hash table is linear in the worst
case, for example, making a compiler O(M*N) overall (even without
overloading), where M is the number of decls, and N is the number of
references.  A compiler isn't a hard real-time program, so people tend
to worry about average or typical case times more than worst case
times.

Well, maybe it's not totally useless.  I guess I would worry if the
compiler had to look at each node O(N**2) times.

Hmm.  N isn't usually very big in Ada, since it's not an expression
language, but it can be -- I've seen things like "X := (... 10_000
component positional aggregate ...);" in machine-generated code.  Note
that Ada has a special rule for aggregates -- the compiler has to
figure out the type of the aggregate *before* looking inside it.

Anyway, in an *optimizing* compiler, overload resolution is probably
not the bottleneck.

- Bob
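
P.S.  To make the two passes concrete, here's a toy sketch in Ada.
Everything in it is made up for illustration -- one overloaded name F
with two invented declarations, and possible types kept as a Boolean
set over an enumeration -- and the real "look at a node" operation is
enormously more complicated than this, for all the reasons above:

    with Ada.Text_IO; use Ada.Text_IO;

    procedure Two_Pass_Demo is

       --  Toy types; a real compiler has a symbol table, not an
       --  enumeration.
       type Type_Id is (T_Int, T_Float, T_Bool);
       type Type_Set is array (Type_Id) of Boolean;

       --  Each visible declaration of the overloaded name F maps an
       --  operand type to a result type.
       type Decl is record
          Operand, Result : Type_Id;
       end record;
       type Decl_List is array (Positive range <>) of Decl;

       --  Two visible F's: F(T_Int) returns T_Float, and F(T_Bool)
       --  returns T_Int.
       Decls : constant Decl_List :=
         ((Operand => T_Int,  Result => T_Float),
          (Operand => T_Bool, Result => T_Int));

       --  The operand is a leaf whose only possible type is T_Int.
       Leaf_Types : constant Type_Set := (T_Int => True, others => False);

       --  Pass 1, bottom-up: compute the set of possible result
       --  types of F(leaf) from the set of possible types of the
       --  leaf.
       function Bottom_Up (Operand_Types : Type_Set) return Type_Set is
          Result : Type_Set := (others => False);
       begin
          for I in Decls'Range loop
             if Operand_Types (Decls (I).Operand) then
                Result (Decls (I).Result) := True;
             end if;
          end loop;
          return Result;
       end Bottom_Up;

       --  Pass 2, top-down: the context demands one particular type;
       --  pick the declaration that yields it, which in turn fixes
       --  the operand's type.
       procedure Top_Down (Context : Type_Id; Chosen : out Decl) is
       begin
          for I in Decls'Range loop
             if Decls (I).Result = Context then
                Chosen := Decls (I);
                return;
             end if;
          end loop;
          raise Program_Error;  --  no interpretation: illegal
       end Top_Down;

       Possible : constant Type_Set := Bottom_Up (Leaf_Types);
       Chosen   : Decl;

    begin
       --  Say the context requires T_Float; only F(T_Int) qualifies.
       if Possible (T_Float) then
          Top_Down (T_Float, Chosen);
          Put_Line ("Chose F with operand type "
                    & Type_Id'Image (Chosen.Operand));
       end if;
    end Two_Pass_Demo;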
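
And here's the hash table worst case I mentioned: with a degenerate
hash function, every declaration lands in one bucket, so a lookup
walks a chain of length M, and N lookups cost O(M*N).  This table is
a toy, of course -- real symbol tables are far more elaborate:

    with Ada.Text_IO; use Ada.Text_IO;
    with Ada.Strings.Unbounded; use Ada.Strings.Unbounded;

    procedure Worst_Case_Hash is

       type Node;
       type Node_Ptr is access Node;
       type Node is record
          Key  : Unbounded_String;
          Next : Node_Ptr;
       end record;

       Buckets : array (0 .. 7) of Node_Ptr := (others => null);

       --  Degenerate hash: every key maps to bucket 0, so the table
       --  collapses into one linked list -- the linear worst case.
       function Hash (Key : String) return Natural is
       begin
          return 0;
       end Hash;

       procedure Insert (Key : String) is
          B : constant Natural := Hash (Key);
       begin
          Buckets (B) := new Node'(To_Unbounded_String (Key), Buckets (B));
       end Insert;

       --  Count how many chain nodes a lookup has to touch.
       function Probes (Key : String) return Natural is
          P : Node_Ptr := Buckets (Hash (Key));
          N : Natural  := 0;
       begin
          while P /= null loop
             N := N + 1;
             exit when To_String (P.Key) = Key;
             P := P.Next;
          end loop;
          return N;
       end Probes;

    begin
       for I in 1 .. 1_000 loop        --  M = 1_000 declarations
          Insert ("Decl" & Integer'Image (I));
       end loop;
       --  The first declaration is now at the end of the chain, so
       --  finding it takes M probes; N such lookups take O(M*N).
       --  (Integer'Image puts a leading space before the number.)
       Put_Line ("Probes:" & Natural'Image (Probes ("Decl 1")));
    end Worst_Case_Hash;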
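
Finally, that aggregate rule in action.  The names here are made up,
but the point is real: the compiler has to get the aggregate's type
from context *before* it looks inside, so an overloaded call can't be
resolved by peeking at the components -- you have to qualify:

    procedure Aggregate_Rule is
       type Vec is array (1 .. 3) of Integer;
       type Arr is array (1 .. 3) of Integer;
       procedure Q (X : Vec) is begin null; end Q;
       procedure Q (X : Arr) is begin null; end Q;
    begin
       --  Q ((1, 2, 3));  --  illegal: the aggregate could be Vec or Arr
       Q (Vec'(1, 2, 3));  --  legal: the qualification supplies the type
    end Aggregate_Rule;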