From: fjh@mundook.cs.mu.OZ.AU (Fergus Henderson)
Subject: Re: CRC in Ada?
Date: 1997/03/06
Message-ID: <5fmo1k$adm@mulga.cs.mu.OZ.AU>
References: <1997Mar2.220652@nova.wright.edu> <331d3cf9.1190126@news.logica.co.uk> <1997Mar5.083233.1@eisner> <1997Mar5.131846.1@eisner>
Organization: Comp Sci, University of Melbourne
Newsgroups: comp.lang.ada

kilgallen@eisner.decus.org (Larry Kilgallen) writes:

>bobduff@world.std.com (Robert A Duff) writes:
>>
>>I think I disagree.  Buffering should be the job of the OS and/or
>>standard libraries.  Not every program.
>
>While theoretical computing and Alan Turing may be centered on
>the equivalence and correctness of programs, in the real world
>performance is also a consideration.

Sure.  That's why buffering is needed.  But buffering should be the
job of the OS and/or standard libraries.

>Just as those using intense computation will be concerned about the
>efficiency of their algorithms, those accessing large databases will
>take care regarding order-of-access, so as not to go skipping over
>the whole thing when some locality-of-reference would yield better
>performance.

Sure.  No disagreement here.

>Those reading sequentially from disk should likewise concern
>themselves with performance, and 512 calls to even the
>lightest-weight library is too much if a single call would do.

Nonsense.  If the calls are inlined and the compiler does a good job
of optimization, there is no reason the cost need be too high.

>For the stated problem (CRC) reading large blocks is a clear win.

Yes, but that is true for just about all performance-intensive stream
I/O tasks.  That's why it makes sense for this to be done by the
standard library and/or the OS.

>>I'm not sure why reading a file character-by-character is "C-like".
>>It seems like the natural way to write lots of programs, in any
>>language.  The underlying language and OS should ensure that it can
>>be done efficiently (by making the "give-me-a-char" routine read
>>from a buffer whenever appropriate).
>
>In my experience it is generally C programmers who make this mistake.

Why is it a mistake?  If the implementation handles it efficiently,
as it should, I don't see any reason to regard using such a mechanism
as a mistake.

--
Fergus Henderson              |  "I have always known that the pursuit
WWW:                          |   of excellence is a lethal habit"
PGP: finger fjh@128.250.37.3  |      -- the last words of T. S. Garp.
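
To make the buffering argument concrete, here is one possible shape of
a "give-me-a-char" routine layered over block reads, in Ada 95.  The
procedure name Buffered_Sum, the 8192-element buffer, and the simple
modular sum standing in for a real CRC are all illustrative choices,
not code from this thread:

with Ada.Command_Line;
with Ada.Streams;           use Ada.Streams;
with Ada.Streams.Stream_IO;
with Ada.Text_IO;

procedure Buffered_Sum is
   package SIO renames Ada.Streams.Stream_IO;

   File   : SIO.File_Type;
   Buffer : Stream_Element_Array (1 .. 8192);
   Last   : Stream_Element_Offset := 0;
   Next   : Stream_Element_Offset := 1;

   --  The "give-me-a-char" routine: in the common case just an index
   --  check, a table fetch, and an increment; one large Read per
   --  8192 characters.
   procedure Get (Ch : out Character; Eof : out Boolean) is
   begin
      if Next > Last then
         SIO.Read (File, Buffer, Last);
         Next := Buffer'First;
         if Last < Buffer'First then   --  nothing left in the file
            Eof := True;
            Ch  := ASCII.NUL;
            return;
         end if;
      end if;
      Eof  := False;
      Ch   := Character'Val (Stream_Element'Pos (Buffer (Next)));
      Next := Next + 1;
   end Get;

   Sum : Natural := 0;   --  stand-in for a real CRC accumulator
   Ch  : Character;
   Eof : Boolean;
begin
   SIO.Open (File, SIO.In_File, Ada.Command_Line.Argument (1));
   loop
      Get (Ch, Eof);
      exit when Eof;
      Sum := (Sum + Character'Pos (Ch)) mod 65536;
   end loop;
   SIO.Close (File);
   Ada.Text_IO.Put_Line ("Sum:" & Natural'Image (Sum));
end Buffered_Sum;

With Get declared where the compiler can inline it, the per-character
cost is a comparison, a fetch, and an increment; the operating system
sees one Read call per 8192 characters, not 8192 calls.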
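
And for the stated problem itself, a sketch of the block-at-a-time
version: a standard table-driven CRC-32 (the reflected 16#EDB8_8320#
polynomial used by zip and Ethernet) fed with 64K reads.  The name
File_CRC32 and the buffer size are again illustrative:

with Ada.Command_Line;
with Ada.Streams;           use Ada.Streams;
with Ada.Streams.Stream_IO;
with Ada.Text_IO;
with Interfaces;            use Interfaces;

procedure File_CRC32 is
   package SIO renames Ada.Streams.Stream_IO;

   Table  : array (Unsigned_32 range 0 .. 255) of Unsigned_32;
   File   : SIO.File_Type;
   Buffer : Stream_Element_Array (1 .. 65_536);
   Last   : Stream_Element_Offset;
   C      : Unsigned_32 := 16#FFFF_FFFF#;
begin
   --  Build the lookup table for the reflected CRC-32 polynomial.
   for N in Table'Range loop
      declare
         R : Unsigned_32 := N;
      begin
         for K in 1 .. 8 loop
            if (R and 1) /= 0 then
               R := Shift_Right (R, 1) xor 16#EDB8_8320#;
            else
               R := Shift_Right (R, 1);
            end if;
         end loop;
         Table (N) := R;
      end;
   end loop;

   SIO.Open (File, SIO.In_File, Ada.Command_Line.Argument (1));
   loop
      --  One Read call per 64K block, then a tight in-memory loop:
      --  the per-byte work is one table lookup and two xors.
      SIO.Read (File, Buffer, Last);
      exit when Last < Buffer'First;
      for I in Buffer'First .. Last loop
         C := Table ((C xor Unsigned_32 (Buffer (I))) and 16#FF#)
              xor Shift_Right (C, 8);
      end loop;
   end loop;
   SIO.Close (File);
   Ada.Text_IO.Put_Line
     ("CRC-32:" & Unsigned_32'Image (C xor 16#FFFF_FFFF#));
end File_CRC32;

Note that the inner loop is identical whether the blocks come from
explicit 64K reads, as here, or from a buffering layer like the one
above; the point of contention is only where that layer lives.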