From: ohk@edeber.tfdt-o.nta.no (Ole-Hjalmar Kristensen FOU.TD/DELAB)
Subject: Re: CRC in Ada?
Date: 1997/03/06
Newsgroups: comp.lang.ada
Organization: Telenor Online Public Access
References: <1997Mar2.220652@nova.wright.edu>

In article <1997Mar5.131846.1@eisner> kilgallen@eisner.decus.org
(Larry Kilgallen) writes:

   In article , bobduff@world.std.com (Robert A Duff) writes:
   > In article <1997Mar5.083233.1@eisner>,
   > Larry Kilgallen wrote:
   >>For Ada, I would guess that implementors have not counted on
   >>people using C-like programming constructs (vs. doing large
   >>buffer reads as suggested earlier in this thread).
   >>
   >>It is certainly possible to build an Ada environment optimized for
   >>single-character reads, but that would not seem to be a priority
   >>for most Ada compiler customers.
   >
   > I think I disagree.  Buffering should be the job of the OS and/or
   > standard libraries.  Not every program.

   While theoretical computing and Alan Turing may be centered on the
   equivalence and correctness of programs, in the real world performance
   is also a consideration.  Just as those using intense computation will
   be concerned about the efficiency of their algorithms, those accessing
   large databases will take care regarding order-of-access so as not to
   go skipping over the whole thing when some locality-of-reference would
   yield better performance.  Those reading sequentially from disk should
   likewise concern themselves with performance, and 512 calls to even the
   lightest-weight library is too much if a single call would do.

   For the stated problem (CRC) reading large blocks is a clear win.

I think you are missing something here.  Although in Unix it IS possible
to do reads of arbitrary length, the standard IO library of C definitely
does IO in blocks.  The getc/putc functions are usually implemented as
macros, which just manipulate the buffer.  Of course it is possible to do
it faster yourself, but the interface is both efficient and simple to use.
If you want to, you can even set the buffer size.  You may judge for
yourself:

#define getc(p)     (--(p)->_cnt < 0 ? __filbuf(p) : (int)*(p)->_ptr++)

#define putc(x, p)  (--(p)->_cnt < 0 ? __flsbuf((unsigned char) (x), (p)) \
                    : (int)(*(p)->_ptr++ = (x)))

   > I'm not sure why reading a file character-by-character is "C-like".
   > It seems like the natural way to write lots of programs, in any
   > language.  The underlying language and OS should ensure that it can
   > be done efficiently (by making the "give-me-a-char" routine read from
   > a buffer whenever appropriate).

   In my experience it is generally C programmers who make this mistake.
   Perhaps it is because many of them come from a Unix background where
   there is no strong sense of a "record".  On the other hand, it may just
   be that there are so many C programmers that statistically speaking
   most of the mistakes made will be made by a C programmer.

   Larry Kilgallen

It should be no harder to implement the putc/getc functions on any other
OS which allows you to do character IO in blocks than it is in Unix.  It
may be a mistake in some cases, but talking about "this mistake" is a
vast oversimplification.
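To make that concrete, here is a minimal sketch assuming nothing beyond a
hosted standard C library; the CRC-32 polynomial and the 64 KB buffer size
are illustrative choices only.  It computes a checksum byte by byte through
getc(), while setvbuf() decides how much each underlying read actually
transfers:

#include <stdio.h>
#include <stdlib.h>

/* Illustrative CRC-32 (reflected IEEE 802.3 polynomial), updated one
   byte at a time.  Any per-byte computation would make the same point. */
static unsigned long crc32_update(unsigned long crc, int byte)
{
    int i;

    crc ^= (unsigned long)byte & 0xFFUL;
    for (i = 0; i < 8; i++) {
        if (crc & 1UL)
            crc = (crc >> 1) ^ 0xEDB88320UL;
        else
            crc >>= 1;
    }
    return crc;
}

int main(int argc, char **argv)
{
    FILE *fp;
    int c;
    unsigned long crc = 0xFFFFFFFFUL;

    if (argc != 2) {
        fprintf(stderr, "usage: %s file\n", argv[0]);
        return EXIT_FAILURE;
    }
    if ((fp = fopen(argv[1], "rb")) == NULL) {
        perror(argv[1]);
        return EXIT_FAILURE;
    }

    /* Ask stdio for a 64 KB buffer (must be done before the first read).
       getc() still hands back one character per call, but the library
       now refills the buffer in 64 KB chunks. */
    setvbuf(fp, NULL, _IOFBF, 64 * 1024);

    /* Character-by-character loop: most getc() calls only bump a pointer
       into the buffer; a real read happens only when it runs dry. */
    while ((c = getc(fp)) != EOF)
        crc = crc32_update(crc, c);

    printf("%08lx\n", (crc ^ 0xFFFFFFFFUL) & 0xFFFFFFFFUL);
    fclose(fp);
    return EXIT_SUCCESS;
}

The loop still pays one getc() per character, but the system-call traffic
is governed entirely by the stdio buffer size, which is exactly the
trade-off being argued about above.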
Surely, there is nothing conceptually wrong with having a set of
single-character IO operations like putc, getc, and ungetc?

Ole-Hj. Kristensen