From mboxrd@z Thu Jan  1 00:00:00 1970
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on polar.synack.me
X-Spam-Level:
X-Spam-Status: No, score=-1.3 required=5.0 tests=BAYES_00,INVALID_MSGID autolearn=no autolearn_force=no version=3.4.4
X-Google-Language: ENGLISH,ASCII-7-bit
X-Google-Thread: 103376,9c86eb13dd395066
X-Google-Attributes: gid103376,public
From: Jim Balter
Subject: Re: CRC in Ada?
Date: 1997/03/10
Message-ID: <33249604.E3F@netcom.com>#1/1
X-Deja-AN: 224514714
References: <1997Mar2.220652@nova.wright.edu> <331d3cf9.1190126@news.logica.co.uk> <1997Mar5.083233.1@eisner> <1997Mar5.131846.1@eisner> <5fmo1k$adm@mulga.cs.mu.OZ.AU> <1997Mar6.114441.1@eisner>
Organization: JQB Enterprises
X-NETCOM-Date: Mon Mar 10 5:17:21 PM CST 1997
Newsgroups: comp.lang.ada
Date: 1997-03-10T17:17:21-06:00
List-Id:

Larry Kilgallen wrote:

> If a programmer _assumes_ that such a construct will be efficient,
> when in fact it is _not_ efficient within a particular environment,
> it is a mistake from a performance perspective.

This same argument was used to justify assembly-language programming for decades. A programmer should assume that her tools are not broken, and that she is not dealing with an anomalous case with unusually poor performance, until and unless project requirements and empirical measurements indicate otherwise.

> I have run into programmers making this mistake over and over again.
> In recent years their immediate response has been "Gee, it runs fast
> on Unix", but in prior years their response was "Gee, it runs fast
> on MVS". Obviously it is only the recent history where the C language
> is involved, but the current generation seems much more surprised than
> their MVS-centric predecessors.
>
> An analogy would be developers who find their MS-DOS game cannot
> write directly to the screen under Windows NT.
> That is a bit
> rougher, as one has to start from scratch explaining the difference
> between an operating system and a run-time library :-)

You are arguing against yourself. Putting screen-I/O optimizations into the lower system or library layers, instead of embedding them into every application, is precisely the path for avoiding this sort of problem. It is your "optimized" application that will fail miserably in many environments. My application, by contrast, assumes an efficient underlying mechanism: a buffered I/O layer, an optimizing compiler instead of "clever" assembly code, an abstract BLT operation that may well be implemented in hardware instead of hand coding, or threads that may be run on parallel processors instead of hand-coded scheduling. It will run well in any environment that provides such a mechanism, which in today's practice means all of them.

And an API is an API is an API; the line between the OS and the run-time library has been quite blurred by developments such as microkernels and dynamically linked libraries. Welcome to the modern age.

--
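[As an aside, the question that started this thread was computing a CRC over a file. The "trust the library" approach argued for above can be sketched in C; this is a hypothetical illustration, not anyone's actual code from the thread, using the reflected IEEE CRC-32 polynomial (0xEDB88320) and plain per-byte getc(), on the assumption that stdio's own buffering makes the byte-at-a-time loop cheap.]

```c
#include <stdint.h>
#include <stdio.h>

/* Fold one byte into a reflected IEEE CRC-32 (polynomial 0xEDB88320). */
static uint32_t crc32_update(uint32_t crc, unsigned char byte)
{
    crc ^= byte;
    for (int i = 0; i < 8; i++) {
        /* If the low bit is set, shift and XOR in the polynomial;
           otherwise just shift.  The mask trick avoids a branch. */
        crc = (crc >> 1) ^ (0xEDB88320u & (uint32_t)-(int32_t)(crc & 1u));
    }
    return crc;
}

/* CRC the whole stream one byte at a time.  No hand-rolled block
   buffering: getc() is assumed cheap because stdio refills its
   internal buffer in large blocks on our behalf. */
uint32_t crc32_stream(FILE *fp)
{
    uint32_t crc = 0xFFFFFFFFu;
    int c;
    while ((c = getc(fp)) != EOF)
        crc = crc32_update(crc, (unsigned char)c);
    return crc ^ 0xFFFFFFFFu;
}
```

The standard check value applies: feeding the ASCII bytes "123456789" through this routine yields 0xCBF43926.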