From: bobduff@world.std.com (Robert A Duff)
Subject: Re: CRC in Ada?
Date: 1997/03/06
References: <1997Mar2.220652@nova.wright.edu> <1997Mar5.131846.1@eisner> <5fmo1k$adm@mulga.cs.mu.OZ.AU> <1997Mar6.114441.1@eisner>
Organization: The World Public Access UNIX, Brookline, MA
Newsgroups: comp.lang.ada

In article <1997Mar6.114441.1@eisner>, Larry Kilgallen wrote:

>If a programmer _assumes_ that such a construct will be efficient,
>when in fact it is _not_ efficient within a particular environment,
>it is a mistake from a performance perspective.

OK, but where do you draw the line?  Suppose I'm trying to write portable software.  I measure the performance on my current OS/compiler/libraries, and it's fine.  Can I assume it will still perform as expected on any other platform?

There don't seem to be any performance requirements in most standards.  E.g. an Ada compiler that implements integer assignment by copying one bit at a time and then executing 10,000 no-op instructions is perfectly valid, according to the RM.  But when I write code, I assume that a (32-bit) integer assignment, on a 100MHz machine, takes something like 1/(10**8) seconds, give or take -- assuming I don't get a cache miss or page fault.  I assume it's not quadratic in the length of the variable name, or some such oddity.

Why should I not assume that a get-character-from-file operation takes no more than a few instructions, on average?  I know how to implement such an operation, and it's not hard -- there's a sketch of one way to do it below.  If it turns out that some OS does it slowly, then I feel righteous in blaming the OS.  (Of course, from a practical point of view, if that OS is popular, then I'll have to live with its limitations.)

Remember, this [sub]thread started with the claim that VMS folks had to go to a lot of trouble to support those "evil" C programmers who wanted to do char-by-char input.  My response is: Good, somebody forced them to do what they should do -- namely, implement char-by-char input efficiently.

>I have run into programmers making this mistake over and over again.
>In recent years their immediate response has been "Gee, it runs fast
>on Unix", but in prior years their response was "Gee, it runs fast
>on MVS".  Obviously it is only the recent history where the C language
>is involved, but the current generation seems much more surprised than
>their MVS-centric predecessors.

Seems pretty reasonable to me: if Unix or MVS can do it fast, why can't whatever OS we're talking about do it fast?  If not, that seems like the fault of the OS, or of the standard library implementation.

>An analogy would be developers who find their MS-DOS game cannot
>write directly to the screen under Windows NT.  That is a bit
>rougher, as one has to start from scratch explaining the difference
>between an operating system and a run-time library :-)

I don't buy the analogy.  If OS X can do some operation fast, and OS Y does it much more slowly, then that's the fault of OS Y.  (Assuming both OS's are real operating systems -- that is, they provide protection of address spaces and so forth.)
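Back to the char-by-char point: by way of illustration, here is a minimal sketch of the kind of buffered Get_Char I have in mind.  The names, the 8K buffer size, and the file name are arbitrary, and it's a sketch rather than a tuned implementation; the point is that the common case is an index comparison, an array fetch, and an increment, with the block Read happening only once every few thousand characters:

   with Ada.Streams;            use Ada.Streams;
   with Ada.Streams.Stream_IO;  use Ada.Streams.Stream_IO;
   with Ada.Text_IO;

   procedure Count_Chars is
      --  Illustrative only: buffered character input built on block reads.
      File   : File_Type;
      Buffer : Stream_Element_Array (1 .. 8_192);
      First  : Stream_Element_Offset := 1;
      Last   : Stream_Element_Offset := 0;
      Count  : Natural := 0;
      C      : Character;
      Got    : Boolean;

      procedure Get_Char (Item : out Character; Ok : out Boolean) is
      begin
         if First > Last then
            --  Buffer exhausted: refill it with one big Read.
            if End_Of_File (File) then
               Ok := False;
               return;
            end if;
            Read (File, Buffer, Last);
            First := Buffer'First;
         end if;
         Item := Character'Val (Buffer (First));
         First := First + 1;
         Ok := True;
      end Get_Char;

   begin
      Open (File, In_File, "input.dat");   --  file name is arbitrary
      loop
         Get_Char (C, Got);
         exit when not Got;
         Count := Count + 1;
      end loop;
      Close (File);
      Ada.Text_IO.Put_Line ("Read" & Natural'Image (Count) & " characters");
   end Count_Chars;

That is roughly what a decent C stdio getc() does, too: the per-character path is a buffer lookup, and only the refill costs a system call.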
The write-directly-to-screen thing is different -- as you say, one needs to explain the difference between MS-DOS and an operating system.

- Bob