From mboxrd@z Thu Jan  1 00:00:00 1970
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on polar.synack.me
X-Spam-Level:
X-Spam-Status: No, score=-1.3 required=5.0 tests=BAYES_00,INVALID_MSGID autolearn=no autolearn_force=no version=3.4.4
X-Google-Language: ENGLISH,ASCII-7-bit
X-Google-Thread: 103376,9c86eb13dd395066
X-Google-Attributes: gid103376,public
From: Jim Balter
Subject: Re: CRC in Ada?
Date: 1997/03/11
Message-ID: <3325DD47.565E@netcom.com>#1/1
X-Deja-AN: 224778216
References: <1997Mar2.220652@nova.wright.edu> <1997Mar5.083233.1@eisner> <1997Mar5.131846.1@eisner> <3324A8B9.4A18@netcom.com>
Organization: JQB Enterprises
X-NETCOM-Date: Tue Mar 11 2:33:34 PM PST 1997
Newsgroups: comp.lang.ada
Date: 1997-03-11T14:33:34-08:00
List-Id:

Robert Dewar wrote:
>
> Jim Balter said
>
> < getchar macro, which is virtually the same on every single system?
> I've done this repeatedly over the last nearly 20 years on many many
> systems.  There is a small overhead per byte due to the getchar
> macro, which is reduced with good optimizing compilers and good
> caching hardware.  For the inner loop of "cat", the getchar cost
> predominates.  For anything else, it doesn't.  This is basic
> algorithmic analysis, which you can find good references for in your
> local library, if you ever bother to head in that direction.>>
>
> (replying to me)
>
> Very curious, your post EXACTLY agrees with my point, which is that
> there is extra overhead, even in C in going character by character,
> and you even go so far as to say (further than I went) that there
> can be programs in which this effect is dominant.

It takes an unusual degree of intellectual dishonesty to misrepresent
one's own point.  No more talk here of extra system calls, of buffering
not being mandated, or of the need to go out and empirically check 6
implementations.

> Well of course your claim that ONLY cat can see this effect is
> over-heated hyperbole,

It would be if I had claimed that; the only claim was about predominance,
i.e., about the major factor in the cost.

> but there is a real difference, and for
> example, in many of the compilers I have written in C, I have
> found that the overall compilation time is noticeably affected by
> the choice of reading character by character or reading blocks.
> A character read is going to do at least one pipe-line breaking
> test (or one should say potentially pipe-line breaking test),
> and it is not going to be free.

The point is, and has been, a radical overstatement of the cost, as
though it were just the luck of the draw whether a getchar call might
do an extra system call, and you had to go out and empirically check 6
different systems to find out.  Of course

    if ((c = (--n < 0 ? fetch() : *p++)) == EOF) break;

is potentially more costly than

    if (i == n) break;
    c = buf[i++];

but it is not the sort of "mistake" to code this way that some have
made it out to be.  The hyperbole is on the other side.

> That was my point, and you seem to completely agree, and I really
> don't see what algorithmic analysis has to do with the situation,
> since we are talking O(N) in any case, i.e. we are discussing
> constants, not algorithmic complexities

The magnitude of the constants is part of algorithmic analysis.

--
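
To put the two loops above side by side in context, here is a minimal,
illustrative sketch (not code from this thread): a standard CRC-32
(polynomial 0xEDB88320) over stdin, computed once with getc per byte and
once with fread into a local buffer.  The function names and the
8192-byte buffer size are arbitrary choices; the only structural
difference between the two routines is whether the buffer-exhaustion
test sits inside the per-byte loop.

#include <stdio.h>

static unsigned long crc_table[256];

/* Build the standard CRC-32 lookup table (reflected 0xEDB88320). */
static void init_crc_table(void)
{
    unsigned long c;
    int n, k;
    for (n = 0; n < 256; n++) {
        c = (unsigned long) n;
        for (k = 0; k < 8; k++)
            c = (c & 1) ? 0xEDB88320UL ^ (c >> 1) : c >> 1;
        crc_table[n] = c;
    }
}

/* Byte at a time: every iteration pays the buffer-exhaustion test
 * hidden inside getc (the "--n < 0 ? fetch() : *p++" machinery). */
static unsigned long crc_by_char(FILE *fp)
{
    unsigned long crc = 0xFFFFFFFFUL;
    int c;
    while ((c = getc(fp)) != EOF)
        crc = crc_table[(crc ^ (unsigned long) c) & 0xFF] ^ (crc >> 8);
    return crc ^ 0xFFFFFFFFUL;
}

/* Block at a time: the exhaustion test is hoisted out of the byte
 * loop, so the inner loop is only the table lookup and shift. */
static unsigned long crc_by_block(FILE *fp)
{
    unsigned long crc = 0xFFFFFFFFUL;
    unsigned char buf[8192];
    size_t n, i;
    while ((n = fread(buf, 1, sizeof buf, fp)) > 0)
        for (i = 0; i < n; i++)
            crc = crc_table[(crc ^ buf[i]) & 0xFF] ^ (crc >> 8);
    return crc ^ 0xFFFFFFFFUL;
}

int main(void)
{
    init_crc_table();
    /* Either routine gives the same CRC for the same input. */
    printf("%08lx\n", crc_by_char(stdin));
    (void) crc_by_block;
    return 0;
}

Both routines are O(N) in the size of the input; the whole disagreement
is over the size of the per-byte constant, which is exactly what the
getc test adds.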