From mboxrd@z Thu Jan 1 00:00:00 1970
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on polar.synack.me
X-Spam-Level:
X-Spam-Status: No, score=-1.3 required=5.0 tests=BAYES_00,INVALID_MSGID autolearn=no autolearn_force=no version=3.4.4
X-Google-Language: ENGLISH,ASCII-7-bit
X-Google-Thread: fac41,9a0ff0bffdf63657
X-Google-Attributes: gidfac41,public
X-Google-Thread: f43e6,9a0ff0bffdf63657
X-Google-Attributes: gidf43e6,public
X-Google-Thread: 103376,4b06f8f15f01a568
X-Google-Attributes: gid103376,public
X-Google-Thread: 1108a1,9a0ff0bffdf63657
X-Google-Attributes: gid1108a1,public
From: Charles Hixson
Subject: Re: Software landmines (loops)
Date: 1998/09/04
Message-ID: <35F0A288.F0AF78C2@earthlink.net>#1/1
X-Deja-AN: 388046576
Content-Transfer-Encoding: 7bit
References: <902934874.2099.0.nnrp-10.c246a717@news.demon.co.uk> <6r1glm$bvh$1@nnrp1.dejanews.com> <6r9f8h$jtm$1@nnrp1.dejanews.com> <6renh8$ga7$1@nnrp1.dejanews.com> <6rf59b$2ud$1@nnrp1.dejanews.com> <6rfra4$rul$1@nnrp1.dejanews.com> <35DBDD24.D003404D@calfp.co.uk> <6sbuod$fra$1@hirame.wwa.com> <35f51e53.48044143@ <904556531.666222@miso.it.uq.edu.au> <35EAEC47.164424A7@s054.aone.net.au> <35EBBFAF.DE38C061@s054.aone.net.au> <35EC28BD.351F33DF@s054.aone.net.au> <35EED7D3.3CFDA8A4@iname.com>
X-Posted-Path-Was: not-for-mail
Content-Type: text/plain; charset=us-ascii
X-ELN-Date: Fri Sep 4 19:28:46 1998
Organization: Mandala Fluteworks
Mime-Version: 1.0
Newsgroups: comp.lang.eiffel,comp.object,comp.software-eng,comp.lang.ada
Date: 1998-09-04T00:00:00+00:00
List-Id:

Joe Gamache wrote:
>
> Matthew Heaney wrote:
>
> > Loryn Jenkins writes:
>
> ...

It seems to me that at least part of this is about how much optimizing you assume the compilers do. If you assume that they aren't doing much, then you try to write code that will execute efficiently. If you assume that they do LOTS of optimization, then you write code that's maximally clean, and leave it to the compiler to figure out what's most efficient.
Of course this is only part of it, but I think that changes in the algorithm that scale linearly are generally overwhelmed by differences between compiler implementations, and also that the best code for this year may not be the best code for next year. I am particularly leery of using constructs like "and then" to optimize performance. That works this year, but two years from now the parallel version of the code will hit a linearizing bottleneck. (Well, two years may be optimistic, but if the code ends up in a library, two years is nothing to the changes that it may need to survive.)

Perhaps what is needed is a formalization of the (Eiffel's?) Command-Query Separation, so that compilers can choose to abort a query in progress if its returned value would be irrelevant. And perhaps the order of the arguments to a boolean operator could reflect the programmer's best guess as to which it would be most profitable to try first. (I know it is frequently implemented this way, but if it were formalized, then it would become something that both programmers and compilers could depend on, and that could be helpful.)