comp.lang.ada
From: gwinn@ma.ultranet.com (Joe Gwinn)
Subject: Re: Timing Ada programs using the DEC UNIX microtimer kernel option
Date: 1998/04/25
Message-ID: <gwinn-2504981807010001@d45.dial-4.cmb.ma.ultra.net>
In-Reply-To: 6hsab5$rh1$1@eplet.mira.net.au


In article <6hsab5$rh1$1@eplet.mira.net.au>, dccoote@werple.mira.net.au
(David Coote) wrote:

> I got all excited when I found out about the microtimer kernel option. Timing 
> measurements we did with this option installed did seem to indicate improved 
> resolution. Interestingly enough, the measured overhead of various POSIX 
> timing calls seemed to improve. (Judging from how long the call took using 
> other timing probes.)
> 
> But I'm not sure how this kernel option works. DEC USA support and what 
> documentation exists tells me that you have a tick of 1024Hz on an AS500 and 
> 1200Hz on an AS4100. With the microtimer kernel option installed (quoting 
> from a DEC support email)
> 
> "As for micro-timer resolution - the clock resolution remains the same - 
> 1/1024; however, any call to clock_gettime(3) is guaranteed to be 
> monotonically increasing with a granularity of 1 microsecond. (i.e. the clock 
> is still ticking at 1/1024, but clock_gettime(3) calls return values that are 
> unique and APPEAR [DEC's emphasis] to have 1 microsecond resolution. This is 
> useful for critical timestamping.)"
> 
> Well, how do they do this monotonic increase? If the appropriate parts of the 
> kernel are getting tickled every 1024Hz, how is the kernel returning finer 
> resolution between ticks?

This is a common way to ensure a strictly monotonic clock, needed so that
one can use timestamps to uniquely determine order of events, even if the
available clock hardware isn't all that great.  

A POSIX.1b time structure (struct timespec) contains two 32-bit integer
fields: the first field holds seconds since 00:00:00 UTC 1 January 1970
(the UNIX timescale origin), and the second field holds decimal nanoseconds
into the current second.  Many UNIX systems have much the same setup
(struct timeval), except that the second longword instead holds
microseconds into the current second.
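
For concreteness, here is a minimal C sketch of the two layouts as exposed
by clock_gettime() and gettimeofday(); the field names are the standard
POSIX ones (on some older systems you may need to link with -lrt for
clock_gettime):

#include <stdio.h>
#include <time.h>       /* struct timespec, clock_gettime() -- POSIX.1b       */
#include <sys/time.h>   /* struct timeval, gettimeofday() -- older UNIX style */

int main(void)
{
    struct timespec ts;  /* tv_sec: seconds since 00:00:00 UTC 1 Jan 1970; tv_nsec: ns into the second */
    struct timeval  tv;  /* tv_sec: same epoch; tv_usec: microseconds into the current second          */

    clock_gettime(CLOCK_REALTIME, &ts);
    gettimeofday(&tv, NULL);

    printf("timespec: %ld s + %ld ns\n", (long)ts.tv_sec, (long)ts.tv_nsec);
    printf("timeval:  %ld s + %ld us\n", (long)tv.tv_sec, (long)tv.tv_usec);
    return 0;
}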

The monotonic-clock implementation used by DEC and others is that every
time the 1024-Hz tick arrives, the microsecond count is incremented by one
tick's worth, roughly 977 microseconds (1/1024 second), or the nanosecond
count by roughly 976,562 nanoseconds.  In addition, whenever the clock is
read, the microsecond (or nanosecond) longword is incremented by one in its
least significant bit using an atomic-increment machine instruction.  So,
timestamps are a combination of a time and a serial number.  The note from
DEC didn't say if the serial-number portion is ever zeroed, but I would
guess that it is zeroed on every clock tick.
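
To make the tick-plus-serial-number idea concrete, here is a toy user-space
sketch in C.  The 1024-Hz rate and the zero-on-tick guess come from the
discussion above; the function names, the omitted locking between the two
fields, and the use of C11 atomics in place of a bare atomic-increment
instruction are mine, not DEC's actual code.

#include <stdatomic.h>   /* stands in for the kernel's atomic-increment instruction */

static atomic_long sec_count  = 0;   /* seconds since the UNIX epoch          */
static atomic_long nsec_count = 0;   /* nanoseconds into the current second   */
static long        nsec_base  = 0;   /* value at the last tick, no serial part */

/* Called by the 1024-Hz clock interrupt: advance by one tick, ~976,562 ns. */
void clock_tick(void)
{
    nsec_base += 976562;              /* 1/1024 second, truncated */
    if (nsec_base >= 1000000000L) {
        nsec_base -= 1000000000L;
        atomic_fetch_add(&sec_count, 1);
    }
    /* Storing the base discards whatever "serial number" the reads added
     * since the last tick -- the zeroing guessed at above.               */
    atomic_store(&nsec_count, nsec_base);
}

/* Called on every clock read: each read atomically bumps the count, so two
 * reads within the same tick interval still return unique, increasing values. */
void toy_clock_gettime(long *sec, long *nsec)
{
    *nsec = atomic_fetch_add(&nsec_count, 1) + 1;
    *sec  = atomic_load(&sec_count);
}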

Another approach (not used by DEC for the systems mentioned) is to have
actual microsecond clock hardware.  A typical design has a timer chip
generating a tick every 10 milliseconds; each tick advances the POSIX.1b
timestruct by 10 ms.  Whenever somebody asks for the time, the timer
chip's free-running counter is read and the result added to the POSIX.1b
time of the last tick, so the resulting POSIX.1b time has true microsecond
resolution.
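
A sketch of that interpolation scheme follows, with read_usec_counter()
standing in for whatever free-running counter the timer chip provides; it
and the rest of the names here are hypothetical, not any particular
vendor's code.

/* Hypothetical hardware read: microseconds elapsed since the last 10 ms tick. */
extern unsigned long read_usec_counter(void);

static long last_tick_sec;    /* POSIX.1b time recorded at the last 10 ms tick */
static long last_tick_nsec;

/* 100-Hz tick handler: advance the recorded time by 10 ms per tick. */
void ten_ms_tick(void)
{
    last_tick_nsec += 10000000L;          /* 10 ms in nanoseconds */
    if (last_tick_nsec >= 1000000000L) {
        last_tick_nsec -= 1000000000L;
        last_tick_sec  += 1;
    }
}

/* Time-of-day read: the last tick's time plus the hardware counter gives
 * true microsecond resolution between ticks.                             */
void interpolated_gettime(long *sec, long *nsec)
{
    long ns = last_tick_nsec + 1000L * (long)read_usec_counter();
    *sec = last_tick_sec;
    if (ns >= 1000000000L) {
        ns -= 1000000000L;
        *sec += 1;
    }
    *nsec = ns;
}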

Joe Gwinn



