From: dennison@telepath.com (Ted Dennison)
Newsgroups: comp.lang.ada
Subject: Re: Anybody in US using ADA ? One silly idea..
Date: 8 Jan 2003 13:54:10 -0800
Organization: http://groups.google.com/
Message-ID: <4519e058.0301081354.1a528d4e@posting.google.com>
References: <3E147D79.2070703@cogeco.ca> <3chl1vg7p83jlgcgjndaa8n5lnh11a3l5t@4ax.com> <1041999874.693157@master.nyc.kbcfp.com>

"Dmitry A. Kazakov" wrote in message news:...
> Hyman Rosen wrote:
> No study I know of. However, it is easy to imagine a case when a
> non-inlined subroutine could be more efficient [in terms of speed]
> than an inlined one. I do not care about the realism of the following
> example, but let us have a sort of applet which runs some simple
> algorithm for many different types. Each time you load it, only one
> branch for one selected type is executed. If the algorithm is an
> inlined instance, the time for loading, initialization of the paged
> memory, symbol relocation etc., could be longer than the time of all
> dereferences and dispatch.

Actually, what I would be even more worried about is the effect on CPU
instruction caches. A non-inlined program is going to be smaller than
an inlined one, perhaps significantly smaller if a lot of inlining is
performed. If that causes an instruction-cache miss where there would
not have been one, you could find your 8 CPU-cycle "optimization" (less
if you have a good branch prediction unit, as most modern 32-bit
processors do) actually *costing* you hundreds of CPU cycles every time
it's executed. A rough sketch of that trade-off follows the quoted text
below.

> > That is, in the C++ model of generics, the compiler has all the type
> > information available at the point where it is compiling the code,
> > so it's hard to see how any other model can be *more* efficient. So
> > the best you can hope for is to be as efficient.
>
> Yes. Only "hope" is IMO the wrong word. I think it is indeed possible
> to make tagged types as efficient as fully inlined macro expansions,
> provided that the corresponding class-wide routines are also inlined.
>
> > I think of the canonical case as being std::sort, which can inline
> > comparison between objects, so sorting an array of integers involves
> > just doing an inline machine integer compare.
>
> > Not only that, but the C++ model allows for specialization, so that
> > the body of code in question does not all have to come from the same
> > block of generic code - there can be completely different bodies for
> > different types. This makes generic sharing even less likely there.
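Coming back to that instruction-cache trade-off, here is a rough Ada
sketch of it. Everything in it (the Inline_Demo and Swap names, the
handful of call sites) is invented for illustration, not taken from
anyone's code. With the pragma, each call site gets its own copy of
Swap's body; without it, every call site shares one out-of-line body
and pays for a call instead.

procedure Inline_Demo is

   procedure Swap (X, Y : in out Integer);
   pragma Inline (Swap);  --  remove this to get one shared, out-of-line body

   procedure Swap (X, Y : in out Integer) is
      T : constant Integer := X;
   begin
      X := Y;
      Y := T;
   end Swap;

   A, B : Integer := 1;
   C, D : Integer := 2;

begin
   --  Imagine many more call sites like these scattered through a
   --  large program: inlined, each one is a few duplicated
   --  instructions; non-inlined, each is a single call into the one
   --  shared body.
   Swap (A, B);
   Swap (C, D);
   Swap (A, C);
end Inline_Demo;

Whether the few cycles saved per call pay for the larger code is exactly
the cache (and, in Dmitry's applet example, load-time) question, and it
is really only settled by measuring on the target.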
>
> My point was:
>
> - either we implement generics as macro expansions and then have
>   disadvantages XYZ;
>
> - or we share bodies, losing some of the power of generics, so that
>   the question arises: why generics and not tagged types?
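For what it's worth, those two alternatives can be put side by side in
a few lines of Ada. The package below is purely illustrative
(Two_Models, Generic_Min, Comparable, Less and Min are all made-up
names): the generic gets a separate, fully specialized body per
instantiation, which is the macro-expansion model, while the class-wide
Min is one shared body that pays for a dispatching call on Less instead
of an inlined compare.

package Two_Models is

   --  Macro-expansion model: every instantiation gets its own copy of
   --  the body, specialized for the actual type and its "<".
   generic
      type Item is private;
      with function "<" (Left, Right : Item) return Boolean is <>;
   function Generic_Min (A, B : Item) return Item;

   --  Shared-body model: one body for the whole class, dispatching on
   --  Less at run time instead of inlining the comparison.
   type Comparable is abstract tagged null record;
   function Less (Left, Right : Comparable) return Boolean is abstract;
   function Min (A, B : Comparable'Class) return Comparable'Class;

end Two_Models;

package body Two_Models is

   function Generic_Min (A, B : Item) return Item is
   begin
      if A < B then
         return A;
      end if;
      return B;
   end Generic_Min;

   function Min (A, B : Comparable'Class) return Comparable'Class is
   begin
      if Less (A, B) then  --  dispatching call; A and B must share a tag
         return A;
      end if;
      return B;
   end Min;

end Two_Models;

An instantiation like "function Int_Min is new Two_Models.Generic_Min
(Integer);" ends up with a plain machine integer compare in its own
body - the std::sort case above - while every type derived from
Comparable goes through the single shared Min. Which of the two is
faster for a given program is, again, the inlining-versus-code-size
question rather than something one model wins outright.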