From mboxrd@z Thu Jan  1 00:00:00 1970
From: rracine@draper.com (Roger Racine)
Subject: Re: delay until and GNAT - expand
Date: 1999/05/11
Message-ID: #1/1
Sender: nntp@news.draper.com (NNTP Master)
References: <7gpukr$s82$1@nnrp1.dejanews.com> <7grkbb$cee$1@nnrp1.deja.com> <7grvka$lc5$1@nnrp1.deja.com> <7h1e10$drg$1@nnrp1.deja.com>

In article <7h830a$e4$1@nnrp1.deja.com> Robert Dewar writes:

>In article , rracine@draper.com (Roger Racine) wrote:
>> I just did a little checking on Wind River's web site, and I
>> found, for the "MV167C" Motorola 68K board (they do not
>> specify which chip or speed on the web page), "context
>> switching requires only 3.8 microseconds". Not exactly 1
>> microsecond, but I got the order of magnitude correct.
>
>I would say this shows you likely got the order of magnitude
>off by one. Implementation of a delay involves more than a
>simple context switch. As I say, we will measure the WR VXW
>speed on a fast machine and see what we get. I am willing to
>bet you are way off in the one-microsecond estimate. Yes, it
>would be nice if it were only one microsecond, but I am afraid
>we will not see it. Anyway, let's wait till we can get some
>data here.

Since I did not have a good real-time OS handy, I decided to check Windows NT, decidedly -not- a good real-time OS, but it was right on my desk. As mentioned before, I had recently been using the debugger in this area, and remembered single-stepping (in assembly) through the delay statement. So I did it again and counted the "stepi"s.
From the initial "pushl" to set up the call to the delay routine, to the return from the call, it took 654 "stepi"s, which I assume is the same thing as 654 instructions. By the way, the vast majority of those instructions were in NT system calls, and there would be nowhere near that many in a real-time OS (it should not take close to 200 instructions to read the current time, for example).

Unfortunately, this is somewhat meaningless, since the Pentium is not exactly a RISC processor, and a single instruction can take many clock cycles. But it does indicate that, even on a desktop OS, we are not talking about thousands of instructions.

As Robert Dewar has pointed out (in a separate email conversation), there is also a great dependency on how the processor is used. Is the MMU used? For every task switch? How much cache is there? So it is truly difficult to provide the Reference Manual metrics in this area without giving a large list of assumptions.

So I will still maintain that I could (and will, when I can get my hands on a processor board and a real-time OS) get the number down to 1 microsecond. But I will also agree that Robert could use a different setup and get the number up to 10 microseconds using the same OS and processor board. And that, of course, does not include any foolish programming (like having a task disable preemption for a millisecond or two).

So why were these issues not debated when the Reference Manual was being reviewed? What type of metric were the writers thinking about? Or did they expect an answer like "between 350 and 3500 clock cycles, depending on your use of the processor"?

Roger Racine