From mboxrd@z Thu Jan 1 00:00:00 1970
From: gwinn@ma.ultranet.com (Joe Gwinn)
Subject: Re: Timing Ada programs using the DEC UNIX microtimer kernel option
Date: 1998/04/25
Message-ID: #1/1
References: <6hsab5$rh1$1@eplet.mira.net.au>
Organization: Gwinn Instruments
Newsgroups: comp.unix.osf.osf1,comp.sys.dec,comp.lang.ada

In article <6hsab5$rh1$1@eplet.mira.net.au>, dccoote@werple.mira.net.au
(David Coote) wrote:

> I got all excited when I found out about the microtimer kernel option.
> Timing measurements we did with this option installed did seem to indicate
> improved resolution. Interestingly enough, the measured overhead of
> various POSIX timing calls seemed to improve. (Judging from how long the
> call took using other timing probes.)
>
> But I'm not sure how this kernel option works. DEC USA support and what
> documentation exists tell me that you have a tick of 1024 Hz on an AS500
> and 1200 Hz on an AS4100. With the microtimer kernel option installed
> (quoting from a DEC support email):
>
> "As for micro-timer resolution - the clock resolution remains the same -
> 1/1024; however, any call to clock_gettime(3) is guaranteed to be
> monotonically increasing with a granularity of 1 microsecond. (i.e.
> the clock is still ticking at 1/1024, but clock_gettime(3) calls return
> values that are unique and APPEAR [DEC's emphasis] to have 1 microsecond
> resolution. This is useful for critical timestamping.)"
>
> Well, how do they do this monotonic increase? If the appropriate parts of
> the kernel are getting tickled every 1024 Hz, how is the kernel returning
> finer resolution between ticks?

This is a common way to ensure a strictly monotonic clock, needed so that
one can use timestamps to uniquely determine the order of events, even if
the available clock hardware isn't all that good.

A POSIX.1b time structure contains two integer fields, the first field
being seconds since 00:00:00 UTC 1 January 1970 (the UNIX timescale
origin), and the second field being decimal nanoseconds into the current
second. Many UNIX systems have much the same setup, except that the
nanoseconds field instead contains microseconds into the current second.

The monotonic-clock implementation used by DEC and others is this: every
time the 1024-Hz tick arrives, the microsecond or nanosecond count is
advanced by one tick period, 1/1024 second (about 977 microseconds, or
976,563 nanoseconds). In addition, whenever the clock is read, the least
significant bit of the microsecond (or nanosecond) field is incremented
using an atomic-increment machine instruction. So, timestamps are a
combination of a time and a serial number. The note from DEC didn't say
whether the serial-number portion is ever zeroed, but I would guess that
it is zeroed on every clock tick, when the tick period is added.

Another approach (not used by DEC for the systems mentioned) is to have
actual microsecond clock hardware. A typical design has a timer chip
generate a tick every 10 milliseconds; each tick advances the POSIX.1b
timestruct by 10 ms. Whenever somebody asks for the time, the timer
hardware is read and the result is added to the POSIX.1b time of the last
tick, so the resulting POSIX.1b time has true microsecond resolution.

Joe Gwinn